00:00:00.001 Started by upstream project "autotest-per-patch" build number 132695
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.028 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.029 The recommended git tool is: git
00:00:00.029 using credential 00000000-0000-0000-0000-000000000002
00:00:00.031 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.054 Fetching changes from the remote Git repository
00:00:00.059 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.092 Using shallow fetch with depth 1
00:00:00.092 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.092 > git --version # timeout=10
00:00:00.129 > git --version # 'git version 2.39.2'
00:00:00.129 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.157 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.157 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.299 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.312 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.324 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.324 > git config core.sparsecheckout # timeout=10
00:00:05.335 > git read-tree -mu HEAD # timeout=10
00:00:05.349 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.379 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.379 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:05.490 [Pipeline] Start of Pipeline
00:00:05.505 [Pipeline] library
00:00:05.507 Loading library shm_lib@master
00:00:05.507 Library shm_lib@master is cached. Copying from home.
00:00:05.548 [Pipeline] node
00:00:05.566 Running on WFP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.567 [Pipeline] {
00:00:05.576 [Pipeline] catchError
00:00:05.577 [Pipeline] {
00:00:05.588 [Pipeline] wrap
00:00:05.593 [Pipeline] {
00:00:05.598 [Pipeline] stage
00:00:05.600 [Pipeline] { (Prologue)
00:00:05.784 [Pipeline] sh
00:00:06.071 + logger -p user.info -t JENKINS-CI
00:00:06.085 [Pipeline] echo
00:00:06.086 Node: WFP6
00:00:06.092 [Pipeline] sh
00:00:06.392 [Pipeline] setCustomBuildProperty
00:00:06.406 [Pipeline] echo
00:00:06.408 Cleanup processes
00:00:06.414 [Pipeline] sh
00:00:06.698 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.698 388942 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.712 [Pipeline] sh
00:00:06.998 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.998 ++ grep -v 'sudo pgrep'
00:00:06.998 ++ awk '{print $1}'
00:00:06.998 + sudo kill -9
00:00:06.998 + true
00:00:07.013 [Pipeline] cleanWs
00:00:07.021 [WS-CLEANUP] Deleting project workspace...
00:00:07.021 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.027 [WS-CLEANUP] done
00:00:07.032 [Pipeline] setCustomBuildProperty
00:00:07.052 [Pipeline] sh
00:00:07.337 + sudo git config --global --replace-all safe.directory '*'
00:00:07.429 [Pipeline] httpRequest
00:00:08.073 [Pipeline] echo
00:00:08.075 Sorcerer 10.211.164.20 is alive
00:00:08.084 [Pipeline] retry
00:00:08.086 [Pipeline] {
00:00:08.100 [Pipeline] httpRequest
00:00:08.104 HttpMethod: GET
00:00:08.104 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.105 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.112 Response Code: HTTP/1.1 200 OK
00:00:08.112 Success: Status code 200 is in the accepted range: 200,404
00:00:08.112 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:25.472 [Pipeline] }
00:00:25.490 [Pipeline] // retry
00:00:25.499 [Pipeline] sh
00:00:25.784 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:25.801 [Pipeline] httpRequest
00:00:26.181 [Pipeline] echo
00:00:26.189 Sorcerer 10.211.164.20 is alive
00:00:26.209 [Pipeline] retry
00:00:26.216 [Pipeline] {
00:00:26.234 [Pipeline] httpRequest
00:00:26.239 HttpMethod: GET
00:00:26.239 URL: http://10.211.164.20/packages/spdk_2cae84b3cd91427b94c20dfd39a930df25256880.tar.gz
00:00:26.240 Sending request to url: http://10.211.164.20/packages/spdk_2cae84b3cd91427b94c20dfd39a930df25256880.tar.gz
00:00:26.245 Response Code: HTTP/1.1 200 OK
00:00:26.245 Success: Status code 200 is in the accepted range: 200,404
00:00:26.246 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_2cae84b3cd91427b94c20dfd39a930df25256880.tar.gz
00:05:28.408 [Pipeline] }
00:05:28.425 [Pipeline] // retry
00:05:28.431 [Pipeline] sh
00:05:28.716 + tar --no-same-owner -xf spdk_2cae84b3cd91427b94c20dfd39a930df25256880.tar.gz
00:05:31.264 [Pipeline] sh
00:05:31.549 + git -C spdk log --oneline -n5
00:05:31.549 2cae84b3c lib/reduce: Support storing metadata on backing dev. (5 of 5, test cases)
00:05:31.549 a0b4fa764 lib/reduce: Support storing metadata on backing dev. (4 of 5, data unmap with async metadata)
00:05:31.549 080d93a73 lib/reduce: Support storing metadata on backing dev. (3 of 5, reload process)
00:05:31.549 62083ef48 lib/reduce: Support storing metadata on backing dev. (2 of 5, data r/w with async metadata)
00:05:31.549 289f56464 lib/reduce: Support storing metadata on backing dev. (1 of 5, struct define and init process)
00:05:31.560 [Pipeline] }
00:05:31.575 [Pipeline] // stage
00:05:31.584 [Pipeline] stage
00:05:31.586 [Pipeline] { (Prepare)
00:05:31.601 [Pipeline] writeFile
00:05:31.618 [Pipeline] sh
00:05:31.901 + logger -p user.info -t JENKINS-CI
00:05:31.913 [Pipeline] sh
00:05:32.196 + logger -p user.info -t JENKINS-CI
00:05:32.208 [Pipeline] sh
00:05:32.499 + cat autorun-spdk.conf
00:05:32.499 SPDK_RUN_FUNCTIONAL_TEST=1
00:05:32.499 SPDK_TEST_NVMF=1
00:05:32.499 SPDK_TEST_NVME_CLI=1
00:05:32.499 SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:32.499 SPDK_TEST_NVMF_NICS=e810
00:05:32.499 SPDK_TEST_VFIOUSER=1
00:05:32.499 SPDK_RUN_UBSAN=1
00:05:32.499 NET_TYPE=phy
00:05:32.506 RUN_NIGHTLY=0
00:05:32.509 [Pipeline] readFile
00:05:32.534 [Pipeline] withEnv
00:05:32.536 [Pipeline] {
00:05:32.550 [Pipeline] sh
00:05:32.924 + set -ex
00:05:32.925 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:05:32.925 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:05:32.925 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:32.925 ++ SPDK_TEST_NVMF=1
00:05:32.925 ++ SPDK_TEST_NVME_CLI=1
00:05:32.925 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:32.925 ++ SPDK_TEST_NVMF_NICS=e810
00:05:32.925 ++ SPDK_TEST_VFIOUSER=1
00:05:32.925 ++ SPDK_RUN_UBSAN=1
00:05:32.925 ++ NET_TYPE=phy
00:05:32.925 ++ RUN_NIGHTLY=0
00:05:32.925 + case $SPDK_TEST_NVMF_NICS in
00:05:32.925 + DRIVERS=ice
00:05:32.925 + [[ tcp == \r\d\m\a ]]
00:05:32.925 + [[ -n ice ]]
00:05:32.925 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:05:32.925 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:05:32.925 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:05:32.925 rmmod: ERROR: Module irdma is not currently loaded
00:05:32.925 rmmod: ERROR: Module i40iw is not currently loaded
00:05:32.925 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:05:32.925 + true
00:05:32.925 + for D in $DRIVERS
00:05:32.925 + sudo modprobe ice
00:05:32.925 + exit 0
00:05:32.933 [Pipeline] }
00:05:32.947 [Pipeline] // withEnv
00:05:32.951 [Pipeline] }
00:05:32.964 [Pipeline] // stage
00:05:32.972 [Pipeline] catchError
00:05:32.974 [Pipeline] {
00:05:32.985 [Pipeline] timeout
00:05:32.985 Timeout set to expire in 1 hr 0 min
00:05:32.987 [Pipeline] {
00:05:33.000 [Pipeline] stage
00:05:33.002 [Pipeline] { (Tests)
00:05:33.015 [Pipeline] sh
00:05:33.297 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:33.297 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:33.297 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:33.297 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:05:33.297 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:33.297 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:05:33.297 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:05:33.297 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:05:33.298 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:05:33.298 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:05:33.298 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:05:33.298 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:33.298 + source /etc/os-release
00:05:33.298 ++ NAME='Fedora Linux'
00:05:33.298 ++ VERSION='39 (Cloud Edition)'
00:05:33.298 ++ ID=fedora
00:05:33.298 ++ VERSION_ID=39
00:05:33.298 ++ VERSION_CODENAME=
00:05:33.298 ++ PLATFORM_ID=platform:f39
00:05:33.298 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:05:33.298 ++ ANSI_COLOR='0;38;2;60;110;180'
00:05:33.298 ++ LOGO=fedora-logo-icon
00:05:33.298 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:05:33.298 ++ HOME_URL=https://fedoraproject.org/
00:05:33.298 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:05:33.298 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:05:33.298 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:05:33.298 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:05:33.298 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:05:33.298 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:05:33.298 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:05:33.298 ++ SUPPORT_END=2024-11-12
00:05:33.298 ++ VARIANT='Cloud Edition'
00:05:33.298 ++ VARIANT_ID=cloud
00:05:33.298 + uname -a
00:05:33.298 Linux spdk-wfp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:05:33.298 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:05:35.827 Hugepages
00:05:35.827 node hugesize free / total
00:05:35.827 node0 1048576kB 0 / 0
00:05:35.827 node0 2048kB 0 / 0
00:05:35.827 node1 1048576kB 0 / 0
00:05:35.827 node1 2048kB 0 / 0
00:05:35.827 
00:05:35.827 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:35.827 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:05:35.827 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:05:35.827 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:05:35.827 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:05:35.827 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:05:35.827 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:05:35.827 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:05:35.827 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:05:35.827 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:05:35.827 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:05:35.827 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:05:35.827 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:05:35.827 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:05:35.827 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:05:35.827 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:05:35.827 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:05:35.827 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:05:35.827 + rm -f /tmp/spdk-ld-path
00:05:35.827 + source autorun-spdk.conf
00:05:35.827 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:35.827 ++ SPDK_TEST_NVMF=1
00:05:35.827 ++ SPDK_TEST_NVME_CLI=1
00:05:35.827 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:35.827 ++ SPDK_TEST_NVMF_NICS=e810
00:05:35.827 ++ SPDK_TEST_VFIOUSER=1
00:05:35.827 ++ SPDK_RUN_UBSAN=1
00:05:35.827 ++ NET_TYPE=phy
00:05:35.827 ++ RUN_NIGHTLY=0
00:05:35.827 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:05:35.827 + [[ -n '' ]]
00:05:35.827 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:35.827 + for M in /var/spdk/build-*-manifest.txt
00:05:35.827 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:05:35.827 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:05:35.827 + for M in /var/spdk/build-*-manifest.txt
00:05:35.827 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:05:35.827 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:05:36.086 + for M in /var/spdk/build-*-manifest.txt
00:05:36.086 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:05:36.086 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:05:36.086 ++ uname
00:05:36.086 + [[ Linux == \L\i\n\u\x ]]
00:05:36.086 + sudo dmesg -T
00:05:36.086 + sudo dmesg --clear
00:05:36.086 + dmesg_pid=390922
00:05:36.086 + [[ Fedora Linux == FreeBSD ]]
00:05:36.086 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:36.086 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:36.086 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:05:36.086 + [[ -x /usr/src/fio-static/fio ]]
00:05:36.086 + export FIO_BIN=/usr/src/fio-static/fio
00:05:36.086 + FIO_BIN=/usr/src/fio-static/fio
00:05:36.086 + sudo dmesg -Tw
00:05:36.086 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:05:36.086 + [[ ! -v VFIO_QEMU_BIN ]]
00:05:36.086 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:05:36.086 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:36.086 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:36.086 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:05:36.086 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:36.086 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:36.086 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:05:36.086 13:37:18 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:05:36.086 13:37:18 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:05:36.086 13:37:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:36.086 13:37:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:05:36.086 13:37:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:05:36.086 13:37:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:36.086 13:37:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:05:36.086 13:37:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:05:36.086 13:37:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:05:36.086 13:37:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:05:36.086 13:37:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:05:36.086 13:37:18 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:05:36.086 13:37:18 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:05:36.086 13:37:18 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:05:36.086 13:37:18 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:36.086 13:37:18 -- scripts/common.sh@15 -- $ shopt -s extglob
00:05:36.086 13:37:18 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:05:36.086 13:37:18 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:36.086 13:37:18 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:36.086 13:37:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:36.086 13:37:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:36.086 13:37:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:36.086 13:37:18 -- paths/export.sh@5 -- $ export PATH
00:05:36.086 13:37:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:36.086 13:37:18 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:05:36.086 13:37:18 -- common/autobuild_common.sh@493 -- $ date +%s
00:05:36.086 13:37:18 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733402238.XXXXXX
00:05:36.086 13:37:18 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733402238.wVqqwj
00:05:36.086 13:37:18 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:05:36.087 13:37:18 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:05:36.087 13:37:18 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:05:36.087 13:37:18 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:05:36.087 13:37:18 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:05:36.087 13:37:18 -- common/autobuild_common.sh@509 -- $ get_config_params
00:05:36.087 13:37:18 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:05:36.087 13:37:18 -- common/autotest_common.sh@10 -- $ set +x
00:05:36.087 13:37:18 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:05:36.087 13:37:18 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:05:36.087 13:37:18 -- pm/common@17 -- $ local monitor
00:05:36.087 13:37:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:36.087 13:37:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:36.346 13:37:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:36.346 13:37:18 -- pm/common@21 -- $ date +%s
00:05:36.346 13:37:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:36.346 13:37:18 -- pm/common@21 -- $ date +%s
00:05:36.346 13:37:18 -- pm/common@25 -- $ sleep 1
00:05:36.346 13:37:18 -- pm/common@21 -- $ date +%s
00:05:36.346 13:37:18 -- pm/common@21 -- $ date +%s
00:05:36.346 13:37:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733402238
00:05:36.346 13:37:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733402238
00:05:36.346 13:37:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733402238
00:05:36.346 13:37:18 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733402238
00:05:36.346 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733402238_collect-cpu-load.pm.log
00:05:36.346 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733402238_collect-vmstat.pm.log
00:05:36.346 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733402238_collect-cpu-temp.pm.log
00:05:36.346 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733402238_collect-bmc-pm.bmc.pm.log
00:05:37.283 13:37:19 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:05:37.283 13:37:19 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:05:37.283 13:37:19 -- spdk/autobuild.sh@12 -- $ umask 022
00:05:37.283 13:37:19 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:37.283 13:37:19 -- spdk/autobuild.sh@16 -- $ date -u
00:05:37.283 Thu Dec 5 12:37:19 PM UTC 2024
00:05:37.283 13:37:19 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:05:37.283 v25.01-pre-301-g2cae84b3c
00:05:37.283 13:37:19 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:05:37.283 13:37:19 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:05:37.283 13:37:19 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:05:37.283 13:37:19 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:05:37.283 13:37:19 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:05:37.283 13:37:19 -- common/autotest_common.sh@10 -- $ set +x
00:05:37.283 ************************************
00:05:37.283 START TEST ubsan
00:05:37.283 ************************************
00:05:37.283 13:37:19 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:05:37.283 using ubsan
00:05:37.283 
00:05:37.283 real 0m0.000s
00:05:37.283 user 0m0.000s
00:05:37.283 sys 0m0.000s
00:05:37.283 13:37:19 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:05:37.283 13:37:19 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:05:37.283 ************************************
00:05:37.283 END TEST ubsan
00:05:37.283 ************************************
00:05:37.283 13:37:19 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:05:37.283 13:37:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:05:37.283 13:37:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:05:37.283 13:37:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:05:37.283 13:37:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:05:37.283 13:37:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:05:37.283 13:37:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:05:37.283 13:37:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:05:37.283 13:37:19 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:05:37.571 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:05:37.571 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:05:37.830 Using 'verbs' RDMA provider
00:05:51.172 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:06:03.385 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:06:03.385 Creating mk/config.mk...done.
00:06:03.385 Creating mk/cc.flags.mk...done.
00:06:03.385 Type 'make' to build.
00:06:03.385 13:37:45 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:06:03.385 13:37:45 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:06:03.385 13:37:45 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:06:03.385 13:37:45 -- common/autotest_common.sh@10 -- $ set +x
00:06:03.385 ************************************
00:06:03.385 START TEST make
00:06:03.385 ************************************
00:06:03.385 13:37:45 make -- common/autotest_common.sh@1129 -- $ make -j96
00:06:03.385 make[1]: Nothing to be done for 'all'.
00:06:04.325 The Meson build system
00:06:04.325 Version: 1.5.0
00:06:04.325 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:06:04.325 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:06:04.325 Build type: native build
00:06:04.325 Project name: libvfio-user
00:06:04.325 Project version: 0.0.1
00:06:04.325 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:06:04.325 C linker for the host machine: cc ld.bfd 2.40-14
00:06:04.325 Host machine cpu family: x86_64
00:06:04.325 Host machine cpu: x86_64
00:06:04.325 Run-time dependency threads found: YES
00:06:04.325 Library dl found: YES
00:06:04.325 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:06:04.325 Run-time dependency json-c found: YES 0.17
00:06:04.325 Run-time dependency cmocka found: YES 1.1.7
00:06:04.325 Program pytest-3 found: NO
00:06:04.325 Program flake8 found: NO
00:06:04.325 Program misspell-fixer found: NO
00:06:04.325 Program restructuredtext-lint found: NO
00:06:04.325 Program valgrind found: YES (/usr/bin/valgrind)
00:06:04.325 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:06:04.325 Compiler for C supports arguments -Wmissing-declarations: YES
00:06:04.325 Compiler for C supports arguments -Wwrite-strings: YES
00:06:04.325 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:06:04.325 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:06:04.325 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:06:04.325 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:06:04.325 Build targets in project: 8
00:06:04.325 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:06:04.325 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:06:04.325 
00:06:04.325 libvfio-user 0.0.1
00:06:04.325 
00:06:04.325 User defined options
00:06:04.325 buildtype : debug
00:06:04.325 default_library: shared
00:06:04.325 libdir : /usr/local/lib
00:06:04.325 
00:06:04.325 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:06:05.261 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:06:05.261 [1/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:06:05.261 [2/37] Compiling C object samples/lspci.p/lspci.c.o
00:06:05.261 [3/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:06:05.261 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:06:05.261 [5/37] Compiling C object samples/null.p/null.c.o
00:06:05.261 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:06:05.261 [7/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:06:05.261 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:06:05.261 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:06:05.261 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:06:05.261 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:06:05.261 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:06:05.261 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:06:05.261 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:06:05.261 [15/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:06:05.261 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:06:05.261 [17/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:06:05.261 [18/37] Compiling C object test/unit_tests.p/mocks.c.o
00:06:05.261 [19/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:06:05.261 [20/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:06:05.261 [21/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:06:05.261 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:06:05.261 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:06:05.261 [24/37] Compiling C object samples/server.p/server.c.o
00:06:05.261 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:06:05.261 [26/37] Compiling C object samples/client.p/client.c.o
00:06:05.261 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:06:05.261 [28/37] Linking target samples/client
00:06:05.261 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:06:05.518 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:06:05.518 [31/37] Linking target test/unit_tests
00:06:05.518 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:06:05.518 [33/37] Linking target samples/lspci
00:06:05.518 [34/37] Linking target samples/server
00:06:05.518 [35/37] Linking target samples/null
00:06:05.518 [36/37] Linking target samples/shadow_ioeventfd_server
00:06:05.518 [37/37] Linking target samples/gpio-pci-idio-16
00:06:05.518 INFO: autodetecting backend as ninja
00:06:05.518 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:06:05.777 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:06:06.035 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:06:06.035 ninja: no work to do.
00:06:11.360 The Meson build system
00:06:11.360 Version: 1.5.0
00:06:11.360 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:06:11.360 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:06:11.360 Build type: native build
00:06:11.360 Program cat found: YES (/usr/bin/cat)
00:06:11.360 Project name: DPDK
00:06:11.360 Project version: 24.03.0
00:06:11.360 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:06:11.360 C linker for the host machine: cc ld.bfd 2.40-14
00:06:11.360 Host machine cpu family: x86_64
00:06:11.360 Host machine cpu: x86_64
00:06:11.360 Message: ## Building in Developer Mode ##
00:06:11.360 Program pkg-config found: YES (/usr/bin/pkg-config)
00:06:11.360 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:06:11.360 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:06:11.360 Program python3 found: YES (/usr/bin/python3)
00:06:11.360 Program cat found: YES (/usr/bin/cat)
00:06:11.360 Compiler for C supports arguments -march=native: YES
00:06:11.360 Checking for size of "void *" : 8
00:06:11.360 Checking for size of "void *" : 8 (cached)
00:06:11.360 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:06:11.360 Library m found: YES
00:06:11.360 Library numa found: YES
00:06:11.360 Has header "numaif.h" : YES
00:06:11.360 Library fdt found: NO
00:06:11.360 Library execinfo found: NO
00:06:11.360 Has header "execinfo.h" : YES
00:06:11.360 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:06:11.360 Run-time dependency libarchive found: NO (tried pkgconfig)
00:06:11.360 Run-time dependency libbsd found: NO (tried pkgconfig)
00:06:11.360 Run-time dependency jansson found: NO (tried pkgconfig)
00:06:11.360 Run-time dependency openssl found: YES 3.1.1
00:06:11.360 Run-time dependency libpcap found: YES 1.10.4
00:06:11.361 Has header "pcap.h" with dependency libpcap: YES
00:06:11.361 Compiler for C supports arguments -Wcast-qual: YES
00:06:11.361 Compiler for C supports arguments -Wdeprecated: YES
00:06:11.361 Compiler for C supports arguments -Wformat: YES
00:06:11.361 Compiler for C supports arguments -Wformat-nonliteral: NO
00:06:11.361 Compiler for C supports arguments -Wformat-security: NO
00:06:11.361 Compiler for C supports arguments -Wmissing-declarations: YES
00:06:11.361 Compiler for C supports arguments -Wmissing-prototypes: YES
00:06:11.361 Compiler for C supports arguments -Wnested-externs: YES
00:06:11.361 Compiler for C supports arguments -Wold-style-definition: YES
00:06:11.361 Compiler for C supports arguments -Wpointer-arith: YES
00:06:11.361 Compiler for C supports arguments -Wsign-compare: YES
00:06:11.361 Compiler for C supports arguments -Wstrict-prototypes: YES
00:06:11.361 Compiler for C supports arguments -Wundef: YES
00:06:11.361 Compiler for C supports arguments -Wwrite-strings: YES
00:06:11.361 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:06:11.361 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:06:11.361 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:06:11.361 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:06:11.361 Program objdump found: YES (/usr/bin/objdump)
00:06:11.361 Compiler for C supports arguments -mavx512f: YES
00:06:11.361 Checking if "AVX512 checking" compiles: YES
00:06:11.361 Fetching value of define "__SSE4_2__" : 1
00:06:11.361 Fetching value of define "__AES__" : 1
00:06:11.361 Fetching value of define "__AVX__" : 1
00:06:11.361 Fetching value of define "__AVX2__" : 1
00:06:11.361 Fetching value of define "__AVX512BW__" : 1
00:06:11.361 Fetching value of define "__AVX512CD__" : 1
00:06:11.361 Fetching value of define "__AVX512DQ__" : 1
00:06:11.361 Fetching value of define "__AVX512F__" : 1
00:06:11.361 Fetching value of define "__AVX512VL__" : 1 00:06:11.361 Fetching value of define "__PCLMUL__" : 1 00:06:11.361 Fetching value of define "__RDRND__" : 1 00:06:11.361 Fetching value of define "__RDSEED__" : 1 00:06:11.361 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:06:11.361 Fetching value of define "__znver1__" : (undefined) 00:06:11.361 Fetching value of define "__znver2__" : (undefined) 00:06:11.361 Fetching value of define "__znver3__" : (undefined) 00:06:11.361 Fetching value of define "__znver4__" : (undefined) 00:06:11.361 Compiler for C supports arguments -Wno-format-truncation: YES 00:06:11.361 Message: lib/log: Defining dependency "log" 00:06:11.361 Message: lib/kvargs: Defining dependency "kvargs" 00:06:11.361 Message: lib/telemetry: Defining dependency "telemetry" 00:06:11.361 Checking for function "getentropy" : NO 00:06:11.361 Message: lib/eal: Defining dependency "eal" 00:06:11.361 Message: lib/ring: Defining dependency "ring" 00:06:11.361 Message: lib/rcu: Defining dependency "rcu" 00:06:11.361 Message: lib/mempool: Defining dependency "mempool" 00:06:11.361 Message: lib/mbuf: Defining dependency "mbuf" 00:06:11.361 Fetching value of define "__PCLMUL__" : 1 (cached) 00:06:11.361 Fetching value of define "__AVX512F__" : 1 (cached) 00:06:11.361 Fetching value of define "__AVX512BW__" : 1 (cached) 00:06:11.361 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:06:11.361 Fetching value of define "__AVX512VL__" : 1 (cached) 00:06:11.361 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:06:11.361 Compiler for C supports arguments -mpclmul: YES 00:06:11.361 Compiler for C supports arguments -maes: YES 00:06:11.361 Compiler for C supports arguments -mavx512f: YES (cached) 00:06:11.361 Compiler for C supports arguments -mavx512bw: YES 00:06:11.361 Compiler for C supports arguments -mavx512dq: YES 00:06:11.361 Compiler for C supports arguments -mavx512vl: YES 00:06:11.361 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:06:11.361 Compiler for C supports arguments -mavx2: YES 00:06:11.361 Compiler for C supports arguments -mavx: YES 00:06:11.361 Message: lib/net: Defining dependency "net" 00:06:11.361 Message: lib/meter: Defining dependency "meter" 00:06:11.361 Message: lib/ethdev: Defining dependency "ethdev" 00:06:11.361 Message: lib/pci: Defining dependency "pci" 00:06:11.361 Message: lib/cmdline: Defining dependency "cmdline" 00:06:11.361 Message: lib/hash: Defining dependency "hash" 00:06:11.361 Message: lib/timer: Defining dependency "timer" 00:06:11.361 Message: lib/compressdev: Defining dependency "compressdev" 00:06:11.361 Message: lib/cryptodev: Defining dependency "cryptodev" 00:06:11.361 Message: lib/dmadev: Defining dependency "dmadev" 00:06:11.361 Compiler for C supports arguments -Wno-cast-qual: YES 00:06:11.361 Message: lib/power: Defining dependency "power" 00:06:11.361 Message: lib/reorder: Defining dependency "reorder" 00:06:11.361 Message: lib/security: Defining dependency "security" 00:06:11.361 Has header "linux/userfaultfd.h" : YES 00:06:11.361 Has header "linux/vduse.h" : YES 00:06:11.361 Message: lib/vhost: Defining dependency "vhost" 00:06:11.361 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:06:11.361 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:06:11.361 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:06:11.361 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:06:11.361 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:06:11.361 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:06:11.361 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:06:11.361 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:06:11.361 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:06:11.361 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:06:11.361 Program doxygen found: YES (/usr/local/bin/doxygen)
00:06:11.361 Configuring doxy-api-html.conf using configuration
00:06:11.361 Configuring doxy-api-man.conf using configuration
00:06:11.361 Program mandb found: YES (/usr/bin/mandb)
00:06:11.361 Program sphinx-build found: NO
00:06:11.361 Configuring rte_build_config.h using configuration
00:06:11.361 Message:
00:06:11.361 =================
00:06:11.361 Applications Enabled
00:06:11.361 =================
00:06:11.361
00:06:11.361 apps:
00:06:11.361
00:06:11.361
00:06:11.361 Message:
00:06:11.361 =================
00:06:11.361 Libraries Enabled
00:06:11.361 =================
00:06:11.361
00:06:11.361 libs:
00:06:11.361 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:06:11.361 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:06:11.361 cryptodev, dmadev, power, reorder, security, vhost,
00:06:11.361
00:06:11.361 Message:
00:06:11.361 ===============
00:06:11.361 Drivers Enabled
00:06:11.361 ===============
00:06:11.361
00:06:11.361 common:
00:06:11.361
00:06:11.361 bus:
00:06:11.361 pci, vdev,
00:06:11.361 mempool:
00:06:11.361 ring,
00:06:11.361 dma:
00:06:11.361
00:06:11.361 net:
00:06:11.361
00:06:11.361 crypto:
00:06:11.361
00:06:11.361 compress:
00:06:11.361
00:06:11.361 vdpa:
00:06:11.361
00:06:11.361
00:06:11.361 Message:
00:06:11.361 =================
00:06:11.361 Content Skipped
00:06:11.361 =================
00:06:11.361
00:06:11.361 apps:
00:06:11.361 dumpcap: explicitly disabled via build config
00:06:11.361 graph: explicitly disabled via build config
00:06:11.361 pdump: explicitly disabled via build config
00:06:11.361 proc-info: explicitly disabled via build config
00:06:11.361 test-acl: explicitly disabled via build config
00:06:11.361 test-bbdev: explicitly disabled via build config
00:06:11.361 test-cmdline: explicitly disabled via build config
00:06:11.361 test-compress-perf: explicitly disabled via build config
00:06:11.361 test-crypto-perf: explicitly disabled via build config
00:06:11.361 test-dma-perf: explicitly disabled via build config
00:06:11.361 test-eventdev: explicitly disabled via build config
00:06:11.361 test-fib: explicitly disabled via build config
00:06:11.361 test-flow-perf: explicitly disabled via build config
00:06:11.361 test-gpudev: explicitly disabled via build config
00:06:11.361 test-mldev: explicitly disabled via build config
00:06:11.361 test-pipeline: explicitly disabled via build config
00:06:11.361 test-pmd: explicitly disabled via build config
00:06:11.361 test-regex: explicitly disabled via build config
00:06:11.361 test-sad: explicitly disabled via build config
00:06:11.361 test-security-perf: explicitly disabled via build config
00:06:11.361
00:06:11.361 libs:
00:06:11.361 argparse: explicitly disabled via build config
00:06:11.361 metrics: explicitly disabled via build config
00:06:11.361 acl: explicitly disabled via build config
00:06:11.361 bbdev: explicitly disabled via build config
00:06:11.361 bitratestats: explicitly disabled via build config
00:06:11.361 bpf: explicitly disabled via build config
00:06:11.361 cfgfile: explicitly disabled via build config
00:06:11.361 distributor: explicitly disabled via build config
00:06:11.361 efd: explicitly disabled via build config
00:06:11.361 eventdev: explicitly disabled via build config
00:06:11.361 dispatcher: explicitly disabled via build config
00:06:11.361 gpudev: explicitly disabled via build config
00:06:11.361 gro: explicitly disabled via build config
00:06:11.361 gso: explicitly disabled via build config
00:06:11.361 ip_frag: explicitly disabled via build config
00:06:11.361 jobstats: explicitly disabled via build config
00:06:11.361 latencystats: explicitly disabled via build config
00:06:11.361 lpm: explicitly disabled via build config
00:06:11.361 member: explicitly disabled via build config
00:06:11.361 pcapng: explicitly disabled via build config
00:06:11.361 rawdev: explicitly disabled via build config
00:06:11.361 regexdev: explicitly disabled via build config
00:06:11.361 mldev: explicitly disabled via build config
00:06:11.361 rib: explicitly disabled via build config
00:06:11.361 sched: explicitly disabled via build config
00:06:11.361 stack: explicitly disabled via build config
00:06:11.361 ipsec: explicitly disabled via build config
00:06:11.361 pdcp: explicitly disabled via build config
00:06:11.361 fib: explicitly disabled via build config
00:06:11.361 port: explicitly disabled via build config
00:06:11.361 pdump: explicitly disabled via build config
00:06:11.361 table: explicitly disabled via build config
00:06:11.361 pipeline: explicitly disabled via build config
00:06:11.361 graph: explicitly disabled via build config
00:06:11.361 node: explicitly disabled via build config
00:06:11.361
00:06:11.361 drivers:
00:06:11.362 common/cpt: not in enabled drivers build config
00:06:11.362 common/dpaax: not in enabled drivers build config
00:06:11.362 common/iavf: not in enabled drivers build config
00:06:11.362 common/idpf: not in enabled drivers build config
00:06:11.362 common/ionic: not in enabled drivers build config
00:06:11.362 common/mvep: not in enabled drivers build config
00:06:11.362 common/octeontx: not in enabled drivers build config
00:06:11.362 bus/auxiliary: not in enabled drivers build config
00:06:11.362 bus/cdx: not in enabled drivers build config
00:06:11.362 bus/dpaa: not in enabled drivers build config
00:06:11.362 bus/fslmc: not in enabled drivers build config
00:06:11.362 bus/ifpga: not in enabled drivers build config
00:06:11.362 bus/platform: not in enabled drivers build config
00:06:11.362 bus/uacce: not in enabled drivers build config
00:06:11.362 bus/vmbus: not in enabled drivers build config
00:06:11.362 common/cnxk: not in enabled drivers build config
00:06:11.362 common/mlx5: not in enabled drivers build config
00:06:11.362 common/nfp: not in enabled drivers build config
00:06:11.362 common/nitrox: not in enabled drivers build config
00:06:11.362 common/qat: not in enabled drivers build config
00:06:11.362 common/sfc_efx: not in enabled drivers build config
00:06:11.362 mempool/bucket: not in enabled drivers build config
00:06:11.362 mempool/cnxk: not in enabled drivers build config
00:06:11.362 mempool/dpaa: not in enabled drivers build config
00:06:11.362 mempool/dpaa2: not in enabled drivers build config
00:06:11.362 mempool/octeontx: not in enabled drivers build config
00:06:11.362 mempool/stack: not in enabled drivers build config
00:06:11.362 dma/cnxk: not in enabled drivers build config
00:06:11.362 dma/dpaa: not in enabled drivers build config
00:06:11.362 dma/dpaa2: not in enabled drivers build config
00:06:11.362 dma/hisilicon: not in enabled drivers build config
00:06:11.362 dma/idxd: not in enabled drivers build config
00:06:11.362 dma/ioat: not in enabled drivers build config
00:06:11.362 dma/skeleton: not in enabled drivers build config
00:06:11.362 net/af_packet: not in enabled drivers build config
00:06:11.362 net/af_xdp: not in enabled drivers build config
00:06:11.362 net/ark: not in enabled drivers build config
00:06:11.362 net/atlantic: not in enabled drivers build config
00:06:11.362 net/avp: not in enabled drivers build config
00:06:11.362 net/axgbe: not in enabled drivers build config
00:06:11.362 net/bnx2x: not in enabled drivers build config
00:06:11.362 net/bnxt: not in enabled drivers build config
00:06:11.362 net/bonding: not in enabled drivers build config
00:06:11.362 net/cnxk: not in enabled drivers build config
00:06:11.362 net/cpfl: not in enabled drivers build config
00:06:11.362 net/cxgbe: not in enabled drivers build config
00:06:11.362 net/dpaa: not in enabled drivers build config
00:06:11.362 net/dpaa2: not in enabled drivers build config
00:06:11.362 net/e1000: not in enabled drivers build config
00:06:11.362 net/ena: not in enabled drivers build config
00:06:11.362 net/enetc: not in enabled drivers build config
00:06:11.362 net/enetfec: not in enabled drivers build config
00:06:11.362 net/enic: not in enabled drivers build config
00:06:11.362 net/failsafe: not in enabled drivers build config
00:06:11.362 net/fm10k: not in enabled drivers build config
00:06:11.362 net/gve: not in enabled drivers build config
00:06:11.362 net/hinic: not in enabled drivers build config
00:06:11.362 net/hns3: not in enabled drivers build config
00:06:11.362 net/i40e: not in enabled drivers build config
00:06:11.362 net/iavf: not in enabled drivers build config
00:06:11.362 net/ice: not in enabled drivers build config
00:06:11.362 net/idpf: not in enabled drivers build config
00:06:11.362 net/igc: not in enabled drivers build config
00:06:11.362 net/ionic: not in enabled drivers build config
00:06:11.362 net/ipn3ke: not in enabled drivers build config
00:06:11.362 net/ixgbe: not in enabled drivers build config
00:06:11.362 net/mana: not in enabled drivers build config
00:06:11.362 net/memif: not in enabled drivers build config
00:06:11.362 net/mlx4: not in enabled drivers build config
00:06:11.362 net/mlx5: not in enabled drivers build config
00:06:11.362 net/mvneta: not in enabled drivers build config
00:06:11.362 net/mvpp2: not in enabled drivers build config
00:06:11.362 net/netvsc: not in enabled drivers build config
00:06:11.362 net/nfb: not in enabled drivers build config
00:06:11.362 net/nfp: not in enabled drivers build config
00:06:11.362 net/ngbe: not in enabled drivers build config
00:06:11.362 net/null: not in enabled drivers build config
00:06:11.362 net/octeontx: not in enabled drivers build config
00:06:11.362 net/octeon_ep: not in enabled drivers build config
00:06:11.362 net/pcap: not in enabled drivers build config
00:06:11.362 net/pfe: not in enabled drivers build config
00:06:11.362 net/qede: not in enabled drivers build config
00:06:11.362 net/ring: not in enabled drivers build config
00:06:11.362 net/sfc: not in enabled drivers build config
00:06:11.362 net/softnic: not in enabled drivers build config
00:06:11.362 net/tap: not in enabled drivers build config
00:06:11.362 net/thunderx: not in enabled drivers build config
00:06:11.362 net/txgbe: not in enabled drivers build config
00:06:11.362 net/vdev_netvsc: not in enabled drivers build config
00:06:11.362 net/vhost: not in enabled drivers build config
00:06:11.362 net/virtio: not in enabled drivers build config
00:06:11.362 net/vmxnet3: not in enabled drivers build config
00:06:11.362 raw/*: missing internal dependency, "rawdev"
00:06:11.362 crypto/armv8: not in enabled drivers build config
00:06:11.362 crypto/bcmfs: not in enabled drivers build config
00:06:11.362 crypto/caam_jr: not in enabled drivers build config
00:06:11.362 crypto/ccp: not in enabled drivers build config
00:06:11.362 crypto/cnxk: not in enabled drivers build config
00:06:11.362 crypto/dpaa_sec: not in enabled drivers build config
00:06:11.362 crypto/dpaa2_sec: not in enabled drivers build config
00:06:11.362 crypto/ipsec_mb: not in enabled drivers build config
00:06:11.362 crypto/mlx5: not in enabled drivers build config
00:06:11.362 crypto/mvsam: not in enabled drivers build config
00:06:11.362 crypto/nitrox: not in enabled drivers build config
00:06:11.362 crypto/null: not in enabled drivers build config
00:06:11.362 crypto/octeontx: not in enabled drivers build config
00:06:11.362 crypto/openssl: not in enabled drivers build config
00:06:11.362 crypto/scheduler: not in enabled drivers build config
00:06:11.362 crypto/uadk: not in enabled drivers build config
00:06:11.362 crypto/virtio: not in enabled drivers build config
00:06:11.362 compress/isal: not in enabled drivers build config
00:06:11.362 compress/mlx5: not in enabled drivers build config
00:06:11.362 compress/nitrox: not in enabled drivers build config
00:06:11.362 compress/octeontx: not in enabled drivers build config
00:06:11.362 compress/zlib: not in enabled drivers build config
00:06:11.362 regex/*: missing internal dependency, "regexdev"
00:06:11.362 ml/*: missing internal dependency, "mldev"
00:06:11.362 vdpa/ifc: not in enabled drivers build config
00:06:11.362 vdpa/mlx5: not in enabled drivers build config
00:06:11.362 vdpa/nfp: not in enabled drivers build config
00:06:11.362 vdpa/sfc: not in enabled drivers build config
00:06:11.362 event/*: missing internal dependency, "eventdev"
00:06:11.362 baseband/*: missing internal dependency, "bbdev"
00:06:11.362 gpu/*: missing internal dependency, "gpudev"
00:06:11.362
00:06:11.362
00:06:11.362 Build targets in project: 85
00:06:11.362
00:06:11.362 DPDK 24.03.0
00:06:11.362
00:06:11.362 User defined options
00:06:11.362 buildtype : debug
00:06:11.362 default_library : shared
00:06:11.362 libdir : lib
00:06:11.362 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:06:11.362 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:06:11.362 c_link_args :
00:06:11.362 cpu_instruction_set: native
00:06:11.362 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev
00:06:11.362 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev
00:06:11.362 enable_docs : false
00:06:11.362 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:06:11.362 enable_kmods : false
00:06:11.362 max_lcores : 128
00:06:11.362 tests : false
00:06:11.362
00:06:11.362 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:06:11.623 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:06:11.889 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:06:11.889 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:06:11.889 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:06:11.889 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:06:11.889 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:06:11.889 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:06:11.889 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:06:11.889 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:06:11.889 [9/268] Linking static target lib/librte_kvargs.a
00:06:11.889 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:06:11.889 [11/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:06:11.889 [12/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:06:11.889 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:06:11.889 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:06:11.889 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:06:11.889 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:06:11.889 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:06:11.889 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:06:12.147 [19/268] Linking static target lib/librte_log.a
00:06:12.147 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:06:12.147 [21/268] Linking static target lib/librte_pci.a
00:06:12.147 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:06:12.147 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:06:12.147 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:06:12.147 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:06:12.147 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:06:12.405 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:06:12.405 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:06:12.406 [29/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:06:12.406 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:06:12.406 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:06:12.406 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:06:12.406 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:06:12.406 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:06:12.406 [35/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:06:12.406 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:06:12.406 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:06:12.406 [38/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:06:12.406 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:06:12.406 [40/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:06:12.406 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:06:12.406 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:06:12.406 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:06:12.406 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:06:12.406 [45/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:06:12.406 [46/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:06:12.406 [47/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:06:12.406 [48/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:06:12.406 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:06:12.406 [50/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:06:12.406 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:06:12.406 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:06:12.406 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:06:12.406 [54/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:06:12.406 [55/268] Linking static target lib/librte_meter.a
00:06:12.406 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:06:12.406 [57/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:06:12.406 [58/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:06:12.406 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:06:12.406 [60/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:06:12.406 [61/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:06:12.406 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:06:12.406 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:06:12.406 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:06:12.406 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:06:12.406 [66/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:06:12.406 [67/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:06:12.406 [68/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:06:12.406 [69/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:06:12.406 [70/268] Linking static target lib/librte_ring.a
00:06:12.406 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:06:12.406 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:06:12.406 [73/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:06:12.406 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:06:12.406 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:06:12.406 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:06:12.406 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:06:12.406 [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:06:12.406 [79/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:06:12.406 [80/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:06:12.406 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:06:12.406 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:06:12.406 [83/268] Linking static target lib/librte_telemetry.a
00:06:12.406 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:06:12.406 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:06:12.406 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:06:12.406 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:06:12.406 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:06:12.406 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:06:12.406 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:06:12.406 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:06:12.406 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:06:12.406 [93/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:06:12.406 [94/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:06:12.663 [95/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:06:12.663 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:06:12.663 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:06:12.663 [98/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:06:12.663 [99/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:06:12.663 [100/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:06:12.663 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:06:12.663 [102/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:06:12.663 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:06:12.663 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:06:12.663 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:06:12.663 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:06:12.663 [107/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:06:12.663 [108/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:06:12.663 [109/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:06:12.663 [110/268] Linking static target lib/librte_net.a
00:06:12.663 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:06:12.663 [112/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:06:12.663 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:06:12.663 [114/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:06:12.663 [115/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:06:12.663 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:06:12.663 [117/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:06:12.663 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:06:12.663 [119/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:06:12.663 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:06:12.663 [121/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:06:12.663 [122/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:06:12.663 [123/268] Linking static target lib/librte_mempool.a
00:06:12.663 [124/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:06:12.663 [125/268] Linking static target lib/librte_rcu.a
00:06:12.663 [126/268] Linking static target lib/librte_eal.a
00:06:12.663 [127/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:06:12.663 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:06:12.663 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:06:12.663 [130/268] Linking static target lib/librte_cmdline.a
00:06:12.663 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:06:12.663 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:06:12.663 [133/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:06:12.663 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:06:12.663 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:06:12.663 [136/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:06:12.663 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:06:12.663 [138/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:06:12.921 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:06:12.921 [140/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:06:12.921 [141/268] Linking target lib/librte_log.so.24.1
00:06:12.921 [142/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:06:12.921 [143/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:06:12.921 [144/268] Linking static target lib/librte_timer.a
00:06:12.921 [145/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:06:12.921 [146/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:06:12.921 [147/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:06:12.921 [148/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:06:12.921 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:06:12.921 [150/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:06:12.921 [151/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:06:12.921 [152/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:06:12.921 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:06:12.921 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:06:12.921 [155/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:06:12.921 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:06:12.921 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:06:12.921 [158/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:06:12.921 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:06:12.921 [160/268] Linking static target lib/librte_mbuf.a
00:06:12.921 [161/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:06:12.921 [162/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:06:12.921 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:06:12.921 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:06:12.921 [165/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:06:12.921 [166/268] Linking target lib/librte_telemetry.so.24.1
00:06:12.921 [167/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:06:12.921 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:06:12.921 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:06:12.921 [170/268] Linking target lib/librte_kvargs.so.24.1
00:06:12.921 [171/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:06:12.921 [172/268] Linking static target lib/librte_power.a
00:06:12.921 [173/268] Linking static target lib/librte_dmadev.a
00:06:12.921 [174/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:06:12.921 [175/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:06:12.921 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:06:12.921 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:06:12.921 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:06:12.921 [179/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:06:12.921 [180/268] Linking static target lib/librte_security.a
00:06:13.179 [181/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:06:13.179 [182/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:06:13.179 [183/268] Linking static target lib/librte_compressdev.a
00:06:13.179 [184/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:06:13.179 [185/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:06:13.179 [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:06:13.179 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:06:13.179 [188/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:06:13.179 [189/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:06:13.179 [190/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:06:13.179 [191/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:06:13.179 [192/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:06:13.179 [193/268] Linking static target lib/librte_hash.a
00:06:13.179 [194/268] Linking static target lib/librte_reorder.a
00:06:13.179 [195/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:06:13.179 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:06:13.179 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:06:13.179 [198/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:06:13.179 [199/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:06:13.179 [200/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:06:13.179 [201/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:06:13.179 [202/268] Linking static target drivers/librte_bus_vdev.a
00:06:13.179 [203/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:06:13.438 [204/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:06:13.438 [205/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:06:13.438 [206/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:13.438 [207/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:13.438 [208/268] Linking static target drivers/librte_mempool_ring.a 00:06:13.438 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:13.438 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:13.438 [211/268] Linking static target drivers/librte_bus_pci.a 00:06:13.438 [212/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:06:13.438 [213/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:06:13.438 [214/268] Linking static target lib/librte_cryptodev.a 00:06:13.696 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:06:13.696 [216/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:13.696 [217/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:06:13.696 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:06:13.696 [219/268] Linking static target lib/librte_ethdev.a 00:06:13.696 [220/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:13.696 [221/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:06:13.696 [222/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:06:13.696 [223/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:13.954 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:06:13.954 [225/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:06:13.954 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:06:14.212 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:14.777 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:06:15.035 [229/268] Linking static target lib/librte_vhost.a 00:06:15.293 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:16.663 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:06:21.973 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:22.540 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:06:22.540 [234/268] Linking target lib/librte_eal.so.24.1 00:06:22.796 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:06:22.796 [236/268] Linking target lib/librte_ring.so.24.1 00:06:22.796 [237/268] Linking target lib/librte_meter.so.24.1 00:06:22.796 [238/268] Linking target lib/librte_timer.so.24.1 00:06:22.796 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:06:22.796 [240/268] Linking target lib/librte_pci.so.24.1 00:06:22.796 [241/268] Linking target lib/librte_dmadev.so.24.1 00:06:23.053 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:06:23.053 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:06:23.053 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:06:23.053 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:06:23.053 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:06:23.053 [247/268] Linking target lib/librte_rcu.so.24.1 00:06:23.053 
[248/268] Linking target lib/librte_mempool.so.24.1 00:06:23.053 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:06:23.053 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:06:23.053 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:06:23.053 [252/268] Linking target lib/librte_mbuf.so.24.1 00:06:23.053 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:06:23.311 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:06:23.311 [255/268] Linking target lib/librte_net.so.24.1 00:06:23.311 [256/268] Linking target lib/librte_reorder.so.24.1 00:06:23.311 [257/268] Linking target lib/librte_compressdev.so.24.1 00:06:23.311 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:06:23.570 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:06:23.571 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:06:23.571 [261/268] Linking target lib/librte_hash.so.24.1 00:06:23.571 [262/268] Linking target lib/librte_cmdline.so.24.1 00:06:23.571 [263/268] Linking target lib/librte_security.so.24.1 00:06:23.571 [264/268] Linking target lib/librte_ethdev.so.24.1 00:06:23.571 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:06:23.571 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:06:23.830 [267/268] Linking target lib/librte_power.so.24.1 00:06:23.830 [268/268] Linking target lib/librte_vhost.so.24.1 00:06:23.830 INFO: autodetecting backend as ninja 00:06:23.830 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:06:36.035 CC lib/ut_mock/mock.o 00:06:36.035 CC lib/ut/ut.o 00:06:36.035 CC lib/log/log.o 00:06:36.035 CC lib/log/log_flags.o 00:06:36.035 CC lib/log/log_deprecated.o 
00:06:36.035 LIB libspdk_log.a 00:06:36.035 LIB libspdk_ut.a 00:06:36.035 LIB libspdk_ut_mock.a 00:06:36.035 SO libspdk_log.so.7.1 00:06:36.035 SO libspdk_ut.so.2.0 00:06:36.035 SO libspdk_ut_mock.so.6.0 00:06:36.035 SYMLINK libspdk_log.so 00:06:36.035 SYMLINK libspdk_ut_mock.so 00:06:36.035 SYMLINK libspdk_ut.so 00:06:36.035 CC lib/util/base64.o 00:06:36.035 CC lib/util/bit_array.o 00:06:36.035 CC lib/ioat/ioat.o 00:06:36.035 CC lib/util/cpuset.o 00:06:36.035 CC lib/util/crc16.o 00:06:36.035 CC lib/util/crc32.o 00:06:36.035 CC lib/util/crc32c.o 00:06:36.035 CC lib/util/crc32_ieee.o 00:06:36.035 CC lib/dma/dma.o 00:06:36.035 CC lib/util/crc64.o 00:06:36.035 CC lib/util/dif.o 00:06:36.035 CXX lib/trace_parser/trace.o 00:06:36.035 CC lib/util/fd.o 00:06:36.035 CC lib/util/fd_group.o 00:06:36.035 CC lib/util/file.o 00:06:36.035 CC lib/util/iov.o 00:06:36.035 CC lib/util/math.o 00:06:36.035 CC lib/util/hexlify.o 00:06:36.035 CC lib/util/net.o 00:06:36.035 CC lib/util/pipe.o 00:06:36.035 CC lib/util/strerror_tls.o 00:06:36.035 CC lib/util/string.o 00:06:36.035 CC lib/util/uuid.o 00:06:36.035 CC lib/util/xor.o 00:06:36.035 CC lib/util/zipf.o 00:06:36.035 CC lib/util/md5.o 00:06:36.035 CC lib/vfio_user/host/vfio_user_pci.o 00:06:36.035 CC lib/vfio_user/host/vfio_user.o 00:06:36.035 LIB libspdk_dma.a 00:06:36.035 SO libspdk_dma.so.5.0 00:06:36.035 LIB libspdk_ioat.a 00:06:36.035 SO libspdk_ioat.so.7.0 00:06:36.035 SYMLINK libspdk_dma.so 00:06:36.035 SYMLINK libspdk_ioat.so 00:06:36.035 LIB libspdk_vfio_user.a 00:06:36.035 SO libspdk_vfio_user.so.5.0 00:06:36.035 SYMLINK libspdk_vfio_user.so 00:06:36.035 LIB libspdk_util.a 00:06:36.035 SO libspdk_util.so.10.1 00:06:36.035 SYMLINK libspdk_util.so 00:06:36.035 LIB libspdk_trace_parser.a 00:06:36.035 SO libspdk_trace_parser.so.6.0 00:06:36.035 SYMLINK libspdk_trace_parser.so 00:06:36.035 CC lib/vmd/vmd.o 00:06:36.035 CC lib/vmd/led.o 00:06:36.035 CC lib/env_dpdk/env.o 00:06:36.035 CC lib/env_dpdk/memory.o 00:06:36.035 CC 
lib/env_dpdk/pci.o 00:06:36.035 CC lib/env_dpdk/init.o 00:06:36.035 CC lib/env_dpdk/threads.o 00:06:36.035 CC lib/env_dpdk/pci_ioat.o 00:06:36.035 CC lib/env_dpdk/pci_virtio.o 00:06:36.035 CC lib/env_dpdk/pci_vmd.o 00:06:36.035 CC lib/env_dpdk/pci_idxd.o 00:06:36.035 CC lib/env_dpdk/pci_event.o 00:06:36.035 CC lib/json/json_parse.o 00:06:36.035 CC lib/env_dpdk/sigbus_handler.o 00:06:36.035 CC lib/env_dpdk/pci_dpdk.o 00:06:36.035 CC lib/json/json_util.o 00:06:36.035 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:36.035 CC lib/json/json_write.o 00:06:36.035 CC lib/rdma_utils/rdma_utils.o 00:06:36.035 CC lib/env_dpdk/pci_dpdk_2211.o 00:06:36.035 CC lib/conf/conf.o 00:06:36.035 CC lib/idxd/idxd.o 00:06:36.035 CC lib/idxd/idxd_user.o 00:06:36.035 CC lib/idxd/idxd_kernel.o 00:06:36.035 LIB libspdk_conf.a 00:06:36.035 LIB libspdk_rdma_utils.a 00:06:36.035 SO libspdk_conf.so.6.0 00:06:36.035 LIB libspdk_json.a 00:06:36.294 SO libspdk_rdma_utils.so.1.0 00:06:36.294 SO libspdk_json.so.6.0 00:06:36.294 SYMLINK libspdk_conf.so 00:06:36.294 SYMLINK libspdk_rdma_utils.so 00:06:36.294 SYMLINK libspdk_json.so 00:06:36.294 LIB libspdk_idxd.a 00:06:36.294 LIB libspdk_vmd.a 00:06:36.294 SO libspdk_vmd.so.6.0 00:06:36.294 SO libspdk_idxd.so.12.1 00:06:36.552 SYMLINK libspdk_vmd.so 00:06:36.552 SYMLINK libspdk_idxd.so 00:06:36.552 CC lib/rdma_provider/common.o 00:06:36.552 CC lib/rdma_provider/rdma_provider_verbs.o 00:06:36.552 CC lib/jsonrpc/jsonrpc_server.o 00:06:36.552 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:06:36.552 CC lib/jsonrpc/jsonrpc_client.o 00:06:36.552 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:06:36.811 LIB libspdk_rdma_provider.a 00:06:36.811 SO libspdk_rdma_provider.so.7.0 00:06:36.811 LIB libspdk_jsonrpc.a 00:06:36.811 SYMLINK libspdk_rdma_provider.so 00:06:36.811 SO libspdk_jsonrpc.so.6.0 00:06:36.811 SYMLINK libspdk_jsonrpc.so 00:06:36.811 LIB libspdk_env_dpdk.a 00:06:37.069 SO libspdk_env_dpdk.so.15.1 00:06:37.069 SYMLINK libspdk_env_dpdk.so 00:06:37.069 CC lib/rpc/rpc.o 
00:06:37.328 LIB libspdk_rpc.a 00:06:37.328 SO libspdk_rpc.so.6.0 00:06:37.328 SYMLINK libspdk_rpc.so 00:06:37.894 CC lib/notify/notify.o 00:06:37.894 CC lib/notify/notify_rpc.o 00:06:37.894 CC lib/trace/trace.o 00:06:37.894 CC lib/keyring/keyring.o 00:06:37.894 CC lib/keyring/keyring_rpc.o 00:06:37.894 CC lib/trace/trace_flags.o 00:06:37.894 CC lib/trace/trace_rpc.o 00:06:37.894 LIB libspdk_notify.a 00:06:37.894 SO libspdk_notify.so.6.0 00:06:37.894 LIB libspdk_keyring.a 00:06:37.894 LIB libspdk_trace.a 00:06:37.894 SYMLINK libspdk_notify.so 00:06:37.894 SO libspdk_keyring.so.2.0 00:06:37.894 SO libspdk_trace.so.11.0 00:06:38.216 SYMLINK libspdk_keyring.so 00:06:38.216 SYMLINK libspdk_trace.so 00:06:38.474 CC lib/thread/thread.o 00:06:38.474 CC lib/thread/iobuf.o 00:06:38.474 CC lib/sock/sock.o 00:06:38.474 CC lib/sock/sock_rpc.o 00:06:38.734 LIB libspdk_sock.a 00:06:38.734 SO libspdk_sock.so.10.0 00:06:38.734 SYMLINK libspdk_sock.so 00:06:38.992 CC lib/nvme/nvme_ctrlr_cmd.o 00:06:38.992 CC lib/nvme/nvme_ctrlr.o 00:06:38.992 CC lib/nvme/nvme_fabric.o 00:06:38.992 CC lib/nvme/nvme_ns_cmd.o 00:06:38.992 CC lib/nvme/nvme_ns.o 00:06:38.992 CC lib/nvme/nvme_pcie_common.o 00:06:38.992 CC lib/nvme/nvme_pcie.o 00:06:38.992 CC lib/nvme/nvme_qpair.o 00:06:38.992 CC lib/nvme/nvme.o 00:06:38.992 CC lib/nvme/nvme_quirks.o 00:06:38.992 CC lib/nvme/nvme_transport.o 00:06:38.992 CC lib/nvme/nvme_discovery.o 00:06:38.992 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:06:38.992 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:06:38.992 CC lib/nvme/nvme_tcp.o 00:06:38.992 CC lib/nvme/nvme_opal.o 00:06:38.992 CC lib/nvme/nvme_io_msg.o 00:06:38.992 CC lib/nvme/nvme_poll_group.o 00:06:38.992 CC lib/nvme/nvme_zns.o 00:06:38.992 CC lib/nvme/nvme_stubs.o 00:06:38.992 CC lib/nvme/nvme_auth.o 00:06:38.992 CC lib/nvme/nvme_cuse.o 00:06:38.992 CC lib/nvme/nvme_vfio_user.o 00:06:38.992 CC lib/nvme/nvme_rdma.o 00:06:39.558 LIB libspdk_thread.a 00:06:39.558 SO libspdk_thread.so.11.0 00:06:39.558 SYMLINK 
libspdk_thread.so 00:06:39.816 CC lib/vfu_tgt/tgt_endpoint.o 00:06:39.816 CC lib/vfu_tgt/tgt_rpc.o 00:06:39.816 CC lib/virtio/virtio_vhost_user.o 00:06:39.816 CC lib/virtio/virtio.o 00:06:39.816 CC lib/virtio/virtio_vfio_user.o 00:06:39.816 CC lib/virtio/virtio_pci.o 00:06:39.816 CC lib/init/json_config.o 00:06:39.816 CC lib/init/subsystem.o 00:06:39.816 CC lib/init/subsystem_rpc.o 00:06:39.816 CC lib/init/rpc.o 00:06:39.816 CC lib/accel/accel_rpc.o 00:06:39.816 CC lib/accel/accel_sw.o 00:06:39.816 CC lib/accel/accel.o 00:06:39.816 CC lib/blob/blobstore.o 00:06:39.816 CC lib/blob/request.o 00:06:39.816 CC lib/blob/zeroes.o 00:06:39.816 CC lib/blob/blob_bs_dev.o 00:06:39.816 CC lib/fsdev/fsdev.o 00:06:39.816 CC lib/fsdev/fsdev_rpc.o 00:06:39.816 CC lib/fsdev/fsdev_io.o 00:06:40.074 LIB libspdk_init.a 00:06:40.074 SO libspdk_init.so.6.0 00:06:40.074 LIB libspdk_vfu_tgt.a 00:06:40.074 LIB libspdk_virtio.a 00:06:40.074 SO libspdk_vfu_tgt.so.3.0 00:06:40.074 SYMLINK libspdk_init.so 00:06:40.074 SO libspdk_virtio.so.7.0 00:06:40.333 SYMLINK libspdk_vfu_tgt.so 00:06:40.333 SYMLINK libspdk_virtio.so 00:06:40.333 LIB libspdk_fsdev.a 00:06:40.591 SO libspdk_fsdev.so.2.0 00:06:40.591 CC lib/event/app.o 00:06:40.591 CC lib/event/reactor.o 00:06:40.591 CC lib/event/log_rpc.o 00:06:40.591 CC lib/event/app_rpc.o 00:06:40.591 CC lib/event/scheduler_static.o 00:06:40.591 SYMLINK libspdk_fsdev.so 00:06:40.591 LIB libspdk_accel.a 00:06:40.849 SO libspdk_accel.so.16.0 00:06:40.849 SYMLINK libspdk_accel.so 00:06:40.849 LIB libspdk_nvme.a 00:06:40.849 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:06:40.849 LIB libspdk_event.a 00:06:40.849 SO libspdk_event.so.14.0 00:06:40.849 SO libspdk_nvme.so.15.0 00:06:40.849 SYMLINK libspdk_event.so 00:06:41.107 SYMLINK libspdk_nvme.so 00:06:41.107 CC lib/bdev/bdev.o 00:06:41.107 CC lib/bdev/bdev_rpc.o 00:06:41.107 CC lib/bdev/bdev_zone.o 00:06:41.107 CC lib/bdev/part.o 00:06:41.107 CC lib/bdev/scsi_nvme.o 00:06:41.366 LIB libspdk_fuse_dispatcher.a 
00:06:41.366 SO libspdk_fuse_dispatcher.so.1.0 00:06:41.366 SYMLINK libspdk_fuse_dispatcher.so 00:06:41.933 LIB libspdk_blob.a 00:06:42.191 SO libspdk_blob.so.12.0 00:06:42.191 SYMLINK libspdk_blob.so 00:06:42.451 CC lib/blobfs/blobfs.o 00:06:42.451 CC lib/blobfs/tree.o 00:06:42.451 CC lib/lvol/lvol.o 00:06:43.018 LIB libspdk_bdev.a 00:06:43.018 SO libspdk_bdev.so.17.0 00:06:43.018 LIB libspdk_blobfs.a 00:06:43.018 SO libspdk_blobfs.so.11.0 00:06:43.018 SYMLINK libspdk_bdev.so 00:06:43.018 SYMLINK libspdk_blobfs.so 00:06:43.018 LIB libspdk_lvol.a 00:06:43.276 SO libspdk_lvol.so.11.0 00:06:43.276 SYMLINK libspdk_lvol.so 00:06:43.536 CC lib/scsi/dev.o 00:06:43.536 CC lib/nvmf/ctrlr.o 00:06:43.536 CC lib/nbd/nbd.o 00:06:43.536 CC lib/scsi/lun.o 00:06:43.536 CC lib/nvmf/ctrlr_discovery.o 00:06:43.536 CC lib/scsi/port.o 00:06:43.536 CC lib/nbd/nbd_rpc.o 00:06:43.536 CC lib/nvmf/ctrlr_bdev.o 00:06:43.536 CC lib/nvmf/subsystem.o 00:06:43.536 CC lib/scsi/scsi.o 00:06:43.536 CC lib/ftl/ftl_core.o 00:06:43.536 CC lib/scsi/scsi_bdev.o 00:06:43.536 CC lib/nvmf/nvmf.o 00:06:43.536 CC lib/ublk/ublk.o 00:06:43.536 CC lib/ftl/ftl_init.o 00:06:43.536 CC lib/scsi/scsi_pr.o 00:06:43.536 CC lib/ftl/ftl_layout.o 00:06:43.536 CC lib/nvmf/nvmf_rpc.o 00:06:43.536 CC lib/ublk/ublk_rpc.o 00:06:43.536 CC lib/scsi/scsi_rpc.o 00:06:43.536 CC lib/nvmf/transport.o 00:06:43.536 CC lib/ftl/ftl_debug.o 00:06:43.536 CC lib/scsi/task.o 00:06:43.536 CC lib/nvmf/tcp.o 00:06:43.536 CC lib/ftl/ftl_io.o 00:06:43.536 CC lib/nvmf/stubs.o 00:06:43.536 CC lib/ftl/ftl_sb.o 00:06:43.536 CC lib/nvmf/mdns_server.o 00:06:43.536 CC lib/ftl/ftl_l2p.o 00:06:43.536 CC lib/nvmf/vfio_user.o 00:06:43.536 CC lib/ftl/ftl_l2p_flat.o 00:06:43.536 CC lib/ftl/ftl_nv_cache.o 00:06:43.536 CC lib/nvmf/rdma.o 00:06:43.536 CC lib/nvmf/auth.o 00:06:43.536 CC lib/ftl/ftl_band.o 00:06:43.536 CC lib/ftl/ftl_band_ops.o 00:06:43.536 CC lib/ftl/ftl_writer.o 00:06:43.536 CC lib/ftl/ftl_rq.o 00:06:43.536 CC lib/ftl/ftl_reloc.o 00:06:43.536 
CC lib/ftl/ftl_l2p_cache.o 00:06:43.536 CC lib/ftl/ftl_p2l_log.o 00:06:43.536 CC lib/ftl/ftl_p2l.o 00:06:43.536 CC lib/ftl/mngt/ftl_mngt.o 00:06:43.536 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:06:43.536 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:06:43.536 CC lib/ftl/mngt/ftl_mngt_startup.o 00:06:43.536 CC lib/ftl/mngt/ftl_mngt_md.o 00:06:43.536 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:43.536 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:43.536 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:43.536 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:43.536 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:43.536 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:43.536 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:06:43.536 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:06:43.536 CC lib/ftl/utils/ftl_md.o 00:06:43.536 CC lib/ftl/utils/ftl_conf.o 00:06:43.536 CC lib/ftl/utils/ftl_mempool.o 00:06:43.536 CC lib/ftl/utils/ftl_bitmap.o 00:06:43.536 CC lib/ftl/utils/ftl_property.o 00:06:43.536 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:06:43.536 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:06:43.536 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:06:43.536 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:06:43.536 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:06:43.536 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:06:43.536 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:06:43.536 CC lib/ftl/upgrade/ftl_sb_v3.o 00:06:43.536 CC lib/ftl/nvc/ftl_nvc_dev.o 00:06:43.536 CC lib/ftl/upgrade/ftl_sb_v5.o 00:06:43.536 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:06:43.536 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:06:43.536 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:06:43.536 CC lib/ftl/base/ftl_base_dev.o 00:06:43.536 CC lib/ftl/base/ftl_base_bdev.o 00:06:43.536 CC lib/ftl/ftl_trace.o 00:06:44.102 LIB libspdk_nbd.a 00:06:44.102 LIB libspdk_scsi.a 00:06:44.102 SO libspdk_nbd.so.7.0 00:06:44.102 SO libspdk_scsi.so.9.0 00:06:44.102 SYMLINK libspdk_nbd.so 00:06:44.102 SYMLINK libspdk_scsi.so 00:06:44.102 LIB libspdk_ublk.a 00:06:44.102 SO libspdk_ublk.so.3.0 00:06:44.361 SYMLINK libspdk_ublk.so 00:06:44.361 LIB 
libspdk_ftl.a 00:06:44.361 CC lib/iscsi/conn.o 00:06:44.361 CC lib/vhost/vhost.o 00:06:44.361 CC lib/iscsi/init_grp.o 00:06:44.361 CC lib/vhost/vhost_rpc.o 00:06:44.361 CC lib/vhost/vhost_scsi.o 00:06:44.361 CC lib/iscsi/iscsi.o 00:06:44.361 CC lib/iscsi/param.o 00:06:44.361 CC lib/vhost/vhost_blk.o 00:06:44.361 CC lib/vhost/rte_vhost_user.o 00:06:44.361 CC lib/iscsi/tgt_node.o 00:06:44.361 CC lib/iscsi/portal_grp.o 00:06:44.361 CC lib/iscsi/iscsi_subsystem.o 00:06:44.361 CC lib/iscsi/iscsi_rpc.o 00:06:44.361 CC lib/iscsi/task.o 00:06:44.618 SO libspdk_ftl.so.9.0 00:06:44.877 SYMLINK libspdk_ftl.so 00:06:45.136 LIB libspdk_vhost.a 00:06:45.394 LIB libspdk_nvmf.a 00:06:45.394 SO libspdk_vhost.so.8.0 00:06:45.394 SO libspdk_nvmf.so.20.0 00:06:45.394 SYMLINK libspdk_vhost.so 00:06:45.394 LIB libspdk_iscsi.a 00:06:45.394 SO libspdk_iscsi.so.8.0 00:06:45.654 SYMLINK libspdk_nvmf.so 00:06:45.654 SYMLINK libspdk_iscsi.so 00:06:46.221 CC module/env_dpdk/env_dpdk_rpc.o 00:06:46.221 CC module/vfu_device/vfu_virtio.o 00:06:46.221 CC module/vfu_device/vfu_virtio_blk.o 00:06:46.221 CC module/vfu_device/vfu_virtio_rpc.o 00:06:46.221 CC module/vfu_device/vfu_virtio_scsi.o 00:06:46.221 CC module/vfu_device/vfu_virtio_fs.o 00:06:46.221 CC module/sock/posix/posix.o 00:06:46.221 CC module/accel/iaa/accel_iaa.o 00:06:46.221 CC module/accel/iaa/accel_iaa_rpc.o 00:06:46.221 CC module/scheduler/dynamic/scheduler_dynamic.o 00:06:46.221 LIB libspdk_env_dpdk_rpc.a 00:06:46.221 CC module/accel/ioat/accel_ioat.o 00:06:46.221 CC module/keyring/file/keyring.o 00:06:46.221 CC module/accel/ioat/accel_ioat_rpc.o 00:06:46.221 CC module/keyring/file/keyring_rpc.o 00:06:46.221 CC module/fsdev/aio/fsdev_aio.o 00:06:46.221 CC module/keyring/linux/keyring.o 00:06:46.221 CC module/blob/bdev/blob_bdev.o 00:06:46.221 CC module/fsdev/aio/fsdev_aio_rpc.o 00:06:46.221 CC module/keyring/linux/keyring_rpc.o 00:06:46.221 CC module/accel/error/accel_error.o 00:06:46.221 CC module/fsdev/aio/linux_aio_mgr.o 
00:06:46.221 CC module/accel/error/accel_error_rpc.o 00:06:46.221 CC module/scheduler/gscheduler/gscheduler.o 00:06:46.221 CC module/accel/dsa/accel_dsa.o 00:06:46.221 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:06:46.221 CC module/accel/dsa/accel_dsa_rpc.o 00:06:46.221 SO libspdk_env_dpdk_rpc.so.6.0 00:06:46.479 SYMLINK libspdk_env_dpdk_rpc.so 00:06:46.479 LIB libspdk_keyring_file.a 00:06:46.479 LIB libspdk_keyring_linux.a 00:06:46.479 SO libspdk_keyring_file.so.2.0 00:06:46.479 LIB libspdk_scheduler_dpdk_governor.a 00:06:46.479 LIB libspdk_accel_ioat.a 00:06:46.479 LIB libspdk_scheduler_gscheduler.a 00:06:46.479 SO libspdk_keyring_linux.so.1.0 00:06:46.479 LIB libspdk_accel_iaa.a 00:06:46.479 SO libspdk_scheduler_gscheduler.so.4.0 00:06:46.479 LIB libspdk_scheduler_dynamic.a 00:06:46.479 SO libspdk_scheduler_dpdk_governor.so.4.0 00:06:46.479 SO libspdk_accel_ioat.so.6.0 00:06:46.479 LIB libspdk_accel_error.a 00:06:46.479 SO libspdk_accel_iaa.so.3.0 00:06:46.479 SYMLINK libspdk_keyring_file.so 00:06:46.479 SO libspdk_scheduler_dynamic.so.4.0 00:06:46.479 SYMLINK libspdk_keyring_linux.so 00:06:46.479 SO libspdk_accel_error.so.2.0 00:06:46.479 LIB libspdk_blob_bdev.a 00:06:46.479 SYMLINK libspdk_accel_ioat.so 00:06:46.479 SYMLINK libspdk_scheduler_dpdk_governor.so 00:06:46.479 SYMLINK libspdk_scheduler_gscheduler.so 00:06:46.479 LIB libspdk_accel_dsa.a 00:06:46.479 SYMLINK libspdk_accel_iaa.so 00:06:46.479 SO libspdk_blob_bdev.so.12.0 00:06:46.479 SYMLINK libspdk_scheduler_dynamic.so 00:06:46.479 SYMLINK libspdk_accel_error.so 00:06:46.737 SO libspdk_accel_dsa.so.5.0 00:06:46.737 SYMLINK libspdk_blob_bdev.so 00:06:46.737 LIB libspdk_vfu_device.a 00:06:46.737 SYMLINK libspdk_accel_dsa.so 00:06:46.737 SO libspdk_vfu_device.so.3.0 00:06:46.737 SYMLINK libspdk_vfu_device.so 00:06:46.737 LIB libspdk_fsdev_aio.a 00:06:46.737 LIB libspdk_sock_posix.a 00:06:46.996 SO libspdk_fsdev_aio.so.1.0 00:06:46.996 SO libspdk_sock_posix.so.6.0 00:06:46.996 SYMLINK 
libspdk_fsdev_aio.so 00:06:46.996 SYMLINK libspdk_sock_posix.so 00:06:46.996 CC module/blobfs/bdev/blobfs_bdev.o 00:06:46.996 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:06:46.996 CC module/bdev/delay/vbdev_delay.o 00:06:46.996 CC module/bdev/delay/vbdev_delay_rpc.o 00:06:46.996 CC module/bdev/gpt/gpt.o 00:06:46.996 CC module/bdev/gpt/vbdev_gpt.o 00:06:46.996 CC module/bdev/virtio/bdev_virtio_scsi.o 00:06:46.996 CC module/bdev/virtio/bdev_virtio_blk.o 00:06:46.996 CC module/bdev/malloc/bdev_malloc_rpc.o 00:06:46.996 CC module/bdev/malloc/bdev_malloc.o 00:06:46.996 CC module/bdev/virtio/bdev_virtio_rpc.o 00:06:46.996 CC module/bdev/error/vbdev_error.o 00:06:46.996 CC module/bdev/nvme/bdev_nvme.o 00:06:46.996 CC module/bdev/nvme/bdev_nvme_rpc.o 00:06:46.996 CC module/bdev/null/bdev_null.o 00:06:46.996 CC module/bdev/error/vbdev_error_rpc.o 00:06:46.996 CC module/bdev/nvme/nvme_rpc.o 00:06:46.996 CC module/bdev/nvme/bdev_mdns_client.o 00:06:46.996 CC module/bdev/nvme/vbdev_opal_rpc.o 00:06:46.996 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:06:46.996 CC module/bdev/raid/bdev_raid.o 00:06:46.996 CC module/bdev/nvme/vbdev_opal.o 00:06:46.996 CC module/bdev/null/bdev_null_rpc.o 00:06:46.996 CC module/bdev/raid/bdev_raid_rpc.o 00:06:47.254 CC module/bdev/raid/bdev_raid_sb.o 00:06:47.254 CC module/bdev/raid/raid0.o 00:06:47.254 CC module/bdev/raid/raid1.o 00:06:47.254 CC module/bdev/raid/concat.o 00:06:47.254 CC module/bdev/split/vbdev_split.o 00:06:47.254 CC module/bdev/split/vbdev_split_rpc.o 00:06:47.254 CC module/bdev/aio/bdev_aio_rpc.o 00:06:47.254 CC module/bdev/aio/bdev_aio.o 00:06:47.254 CC module/bdev/iscsi/bdev_iscsi.o 00:06:47.254 CC module/bdev/passthru/vbdev_passthru.o 00:06:47.254 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:06:47.254 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:06:47.254 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:06:47.254 CC module/bdev/zone_block/vbdev_zone_block.o 00:06:47.254 CC module/bdev/ftl/bdev_ftl.o 00:06:47.254 CC 
module/bdev/ftl/bdev_ftl_rpc.o 00:06:47.254 CC module/bdev/lvol/vbdev_lvol.o 00:06:47.254 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:06:47.254 LIB libspdk_blobfs_bdev.a 00:06:47.513 SO libspdk_blobfs_bdev.so.6.0 00:06:47.513 LIB libspdk_bdev_gpt.a 00:06:47.513 LIB libspdk_bdev_split.a 00:06:47.513 LIB libspdk_bdev_null.a 00:06:47.513 SO libspdk_bdev_split.so.6.0 00:06:47.513 SYMLINK libspdk_blobfs_bdev.so 00:06:47.513 SO libspdk_bdev_gpt.so.6.0 00:06:47.513 LIB libspdk_bdev_ftl.a 00:06:47.513 LIB libspdk_bdev_error.a 00:06:47.513 SO libspdk_bdev_null.so.6.0 00:06:47.513 LIB libspdk_bdev_passthru.a 00:06:47.513 LIB libspdk_bdev_aio.a 00:06:47.513 SYMLINK libspdk_bdev_gpt.so 00:06:47.513 SO libspdk_bdev_ftl.so.6.0 00:06:47.513 LIB libspdk_bdev_malloc.a 00:06:47.513 SO libspdk_bdev_error.so.6.0 00:06:47.513 SO libspdk_bdev_passthru.so.6.0 00:06:47.513 SYMLINK libspdk_bdev_split.so 00:06:47.513 LIB libspdk_bdev_iscsi.a 00:06:47.513 LIB libspdk_bdev_zone_block.a 00:06:47.513 SO libspdk_bdev_aio.so.6.0 00:06:47.513 SYMLINK libspdk_bdev_null.so 00:06:47.513 LIB libspdk_bdev_delay.a 00:06:47.513 SO libspdk_bdev_malloc.so.6.0 00:06:47.513 SO libspdk_bdev_zone_block.so.6.0 00:06:47.513 SO libspdk_bdev_iscsi.so.6.0 00:06:47.513 SYMLINK libspdk_bdev_ftl.so 00:06:47.513 SO libspdk_bdev_delay.so.6.0 00:06:47.513 SYMLINK libspdk_bdev_error.so 00:06:47.513 SYMLINK libspdk_bdev_passthru.so 00:06:47.513 SYMLINK libspdk_bdev_aio.so 00:06:47.513 SYMLINK libspdk_bdev_malloc.so 00:06:47.513 SYMLINK libspdk_bdev_iscsi.so 00:06:47.513 SYMLINK libspdk_bdev_zone_block.so 00:06:47.772 SYMLINK libspdk_bdev_delay.so 00:06:47.772 LIB libspdk_bdev_lvol.a 00:06:47.772 LIB libspdk_bdev_virtio.a 00:06:47.772 SO libspdk_bdev_lvol.so.6.0 00:06:47.772 SO libspdk_bdev_virtio.so.6.0 00:06:47.772 SYMLINK libspdk_bdev_lvol.so 00:06:47.772 SYMLINK libspdk_bdev_virtio.so 00:06:48.032 LIB libspdk_bdev_raid.a 00:06:48.032 SO libspdk_bdev_raid.so.6.0 00:06:48.032 SYMLINK libspdk_bdev_raid.so 00:06:48.968 LIB 
libspdk_bdev_nvme.a 00:06:48.968 SO libspdk_bdev_nvme.so.7.1 00:06:49.227 SYMLINK libspdk_bdev_nvme.so 00:06:49.794 CC module/event/subsystems/iobuf/iobuf.o 00:06:49.794 CC module/event/subsystems/keyring/keyring.o 00:06:49.794 CC module/event/subsystems/vmd/vmd.o 00:06:49.794 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:49.794 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:49.794 CC module/event/subsystems/scheduler/scheduler.o 00:06:49.794 CC module/event/subsystems/sock/sock.o 00:06:49.794 CC module/event/subsystems/fsdev/fsdev.o 00:06:49.794 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:06:49.794 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:50.053 LIB libspdk_event_fsdev.a 00:06:50.053 LIB libspdk_event_scheduler.a 00:06:50.053 LIB libspdk_event_keyring.a 00:06:50.053 LIB libspdk_event_vhost_blk.a 00:06:50.053 LIB libspdk_event_vmd.a 00:06:50.053 LIB libspdk_event_sock.a 00:06:50.053 LIB libspdk_event_vfu_tgt.a 00:06:50.053 LIB libspdk_event_iobuf.a 00:06:50.053 SO libspdk_event_fsdev.so.1.0 00:06:50.053 SO libspdk_event_scheduler.so.4.0 00:06:50.053 SO libspdk_event_vhost_blk.so.3.0 00:06:50.053 SO libspdk_event_keyring.so.1.0 00:06:50.053 SO libspdk_event_sock.so.5.0 00:06:50.053 SO libspdk_event_vfu_tgt.so.3.0 00:06:50.053 SO libspdk_event_vmd.so.6.0 00:06:50.053 SO libspdk_event_iobuf.so.3.0 00:06:50.053 SYMLINK libspdk_event_fsdev.so 00:06:50.053 SYMLINK libspdk_event_scheduler.so 00:06:50.053 SYMLINK libspdk_event_vhost_blk.so 00:06:50.053 SYMLINK libspdk_event_keyring.so 00:06:50.053 SYMLINK libspdk_event_vfu_tgt.so 00:06:50.053 SYMLINK libspdk_event_sock.so 00:06:50.053 SYMLINK libspdk_event_vmd.so 00:06:50.053 SYMLINK libspdk_event_iobuf.so 00:06:50.311 CC module/event/subsystems/accel/accel.o 00:06:50.569 LIB libspdk_event_accel.a 00:06:50.569 SO libspdk_event_accel.so.6.0 00:06:50.569 SYMLINK libspdk_event_accel.so 00:06:50.827 CC module/event/subsystems/bdev/bdev.o 00:06:51.086 LIB libspdk_event_bdev.a 00:06:51.086 SO 
libspdk_event_bdev.so.6.0 00:06:51.086 SYMLINK libspdk_event_bdev.so 00:06:51.711 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:51.711 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:51.711 CC module/event/subsystems/nbd/nbd.o 00:06:51.711 CC module/event/subsystems/scsi/scsi.o 00:06:51.711 CC module/event/subsystems/ublk/ublk.o 00:06:51.711 LIB libspdk_event_ublk.a 00:06:51.711 LIB libspdk_event_nbd.a 00:06:51.711 LIB libspdk_event_scsi.a 00:06:51.711 SO libspdk_event_ublk.so.3.0 00:06:51.711 SO libspdk_event_nbd.so.6.0 00:06:51.711 SO libspdk_event_scsi.so.6.0 00:06:51.711 LIB libspdk_event_nvmf.a 00:06:51.711 SO libspdk_event_nvmf.so.6.0 00:06:51.711 SYMLINK libspdk_event_ublk.so 00:06:51.711 SYMLINK libspdk_event_nbd.so 00:06:51.711 SYMLINK libspdk_event_scsi.so 00:06:51.711 SYMLINK libspdk_event_nvmf.so 00:06:51.970 CC module/event/subsystems/iscsi/iscsi.o 00:06:51.970 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:52.228 LIB libspdk_event_iscsi.a 00:06:52.228 LIB libspdk_event_vhost_scsi.a 00:06:52.228 SO libspdk_event_iscsi.so.6.0 00:06:52.228 SO libspdk_event_vhost_scsi.so.3.0 00:06:52.228 SYMLINK libspdk_event_iscsi.so 00:06:52.228 SYMLINK libspdk_event_vhost_scsi.so 00:06:52.488 SO libspdk.so.6.0 00:06:52.488 SYMLINK libspdk.so 00:06:52.744 CXX app/trace/trace.o 00:06:52.744 TEST_HEADER include/spdk/accel_module.h 00:06:52.744 TEST_HEADER include/spdk/accel.h 00:06:52.744 TEST_HEADER include/spdk/assert.h 00:06:52.744 TEST_HEADER include/spdk/barrier.h 00:06:52.744 TEST_HEADER include/spdk/base64.h 00:06:52.744 CC app/trace_record/trace_record.o 00:06:52.744 CC app/spdk_nvme_perf/perf.o 00:06:52.744 TEST_HEADER include/spdk/bdev.h 00:06:52.744 TEST_HEADER include/spdk/bdev_module.h 00:06:52.744 TEST_HEADER include/spdk/bdev_zone.h 00:06:52.744 CC app/spdk_lspci/spdk_lspci.o 00:06:52.744 CC test/rpc_client/rpc_client_test.o 00:06:52.744 TEST_HEADER include/spdk/bit_array.h 00:06:52.744 TEST_HEADER include/spdk/bit_pool.h 00:06:52.744 
TEST_HEADER include/spdk/blob_bdev.h 00:06:52.744 CC app/spdk_top/spdk_top.o 00:06:52.744 TEST_HEADER include/spdk/blobfs.h 00:06:52.744 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:52.744 TEST_HEADER include/spdk/conf.h 00:06:52.744 TEST_HEADER include/spdk/blob.h 00:06:52.744 TEST_HEADER include/spdk/config.h 00:06:52.744 TEST_HEADER include/spdk/crc32.h 00:06:52.744 TEST_HEADER include/spdk/cpuset.h 00:06:52.744 CC app/spdk_nvme_discover/discovery_aer.o 00:06:52.744 TEST_HEADER include/spdk/crc64.h 00:06:52.744 TEST_HEADER include/spdk/dif.h 00:06:52.744 TEST_HEADER include/spdk/crc16.h 00:06:52.744 TEST_HEADER include/spdk/endian.h 00:06:52.744 TEST_HEADER include/spdk/dma.h 00:06:52.744 TEST_HEADER include/spdk/env.h 00:06:52.744 CC app/spdk_nvme_identify/identify.o 00:06:52.744 TEST_HEADER include/spdk/env_dpdk.h 00:06:52.744 TEST_HEADER include/spdk/fd.h 00:06:52.744 TEST_HEADER include/spdk/event.h 00:06:52.744 TEST_HEADER include/spdk/fd_group.h 00:06:52.744 TEST_HEADER include/spdk/file.h 00:06:52.744 TEST_HEADER include/spdk/fsdev.h 00:06:52.744 TEST_HEADER include/spdk/fsdev_module.h 00:06:52.744 TEST_HEADER include/spdk/ftl.h 00:06:52.744 TEST_HEADER include/spdk/hexlify.h 00:06:52.744 TEST_HEADER include/spdk/gpt_spec.h 00:06:52.744 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:52.744 TEST_HEADER include/spdk/idxd.h 00:06:52.744 TEST_HEADER include/spdk/histogram_data.h 00:06:52.744 TEST_HEADER include/spdk/idxd_spec.h 00:06:52.744 TEST_HEADER include/spdk/init.h 00:06:52.744 TEST_HEADER include/spdk/ioat.h 00:06:52.744 TEST_HEADER include/spdk/ioat_spec.h 00:06:52.745 TEST_HEADER include/spdk/iscsi_spec.h 00:06:52.745 TEST_HEADER include/spdk/json.h 00:06:53.012 TEST_HEADER include/spdk/jsonrpc.h 00:06:53.012 TEST_HEADER include/spdk/keyring_module.h 00:06:53.012 TEST_HEADER include/spdk/likely.h 00:06:53.012 TEST_HEADER include/spdk/keyring.h 00:06:53.012 TEST_HEADER include/spdk/lvol.h 00:06:53.012 TEST_HEADER include/spdk/log.h 00:06:53.012 
TEST_HEADER include/spdk/memory.h 00:06:53.012 TEST_HEADER include/spdk/md5.h 00:06:53.012 TEST_HEADER include/spdk/mmio.h 00:06:53.012 TEST_HEADER include/spdk/nbd.h 00:06:53.012 TEST_HEADER include/spdk/net.h 00:06:53.012 TEST_HEADER include/spdk/notify.h 00:06:53.012 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:53.012 TEST_HEADER include/spdk/nvme.h 00:06:53.012 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:53.012 TEST_HEADER include/spdk/nvme_intel.h 00:06:53.012 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:53.012 TEST_HEADER include/spdk/nvme_spec.h 00:06:53.012 TEST_HEADER include/spdk/nvme_zns.h 00:06:53.012 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:53.012 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:53.012 CC app/iscsi_tgt/iscsi_tgt.o 00:06:53.012 TEST_HEADER include/spdk/nvmf.h 00:06:53.012 TEST_HEADER include/spdk/nvmf_spec.h 00:06:53.012 TEST_HEADER include/spdk/nvmf_transport.h 00:06:53.012 TEST_HEADER include/spdk/opal_spec.h 00:06:53.012 TEST_HEADER include/spdk/pci_ids.h 00:06:53.012 CC app/spdk_dd/spdk_dd.o 00:06:53.012 TEST_HEADER include/spdk/opal.h 00:06:53.012 TEST_HEADER include/spdk/pipe.h 00:06:53.012 TEST_HEADER include/spdk/reduce.h 00:06:53.012 TEST_HEADER include/spdk/rpc.h 00:06:53.012 TEST_HEADER include/spdk/queue.h 00:06:53.012 TEST_HEADER include/spdk/scsi.h 00:06:53.012 TEST_HEADER include/spdk/scheduler.h 00:06:53.012 TEST_HEADER include/spdk/sock.h 00:06:53.012 TEST_HEADER include/spdk/scsi_spec.h 00:06:53.012 TEST_HEADER include/spdk/stdinc.h 00:06:53.012 TEST_HEADER include/spdk/trace.h 00:06:53.012 TEST_HEADER include/spdk/string.h 00:06:53.012 TEST_HEADER include/spdk/trace_parser.h 00:06:53.012 TEST_HEADER include/spdk/thread.h 00:06:53.012 TEST_HEADER include/spdk/tree.h 00:06:53.012 TEST_HEADER include/spdk/ublk.h 00:06:53.012 TEST_HEADER include/spdk/util.h 00:06:53.012 TEST_HEADER include/spdk/uuid.h 00:06:53.012 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:53.012 TEST_HEADER include/spdk/vfio_user_pci.h 
00:06:53.012 CC app/nvmf_tgt/nvmf_main.o 00:06:53.012 TEST_HEADER include/spdk/vhost.h 00:06:53.012 TEST_HEADER include/spdk/version.h 00:06:53.012 TEST_HEADER include/spdk/vmd.h 00:06:53.012 TEST_HEADER include/spdk/xor.h 00:06:53.012 TEST_HEADER include/spdk/zipf.h 00:06:53.012 CXX test/cpp_headers/accel.o 00:06:53.012 CXX test/cpp_headers/accel_module.o 00:06:53.012 CC app/spdk_tgt/spdk_tgt.o 00:06:53.012 CXX test/cpp_headers/assert.o 00:06:53.012 CXX test/cpp_headers/barrier.o 00:06:53.012 CXX test/cpp_headers/base64.o 00:06:53.012 CXX test/cpp_headers/bdev_zone.o 00:06:53.012 CXX test/cpp_headers/bdev.o 00:06:53.012 CXX test/cpp_headers/bit_pool.o 00:06:53.012 CXX test/cpp_headers/bdev_module.o 00:06:53.012 CXX test/cpp_headers/bit_array.o 00:06:53.012 CXX test/cpp_headers/blobfs_bdev.o 00:06:53.012 CXX test/cpp_headers/blobfs.o 00:06:53.012 CXX test/cpp_headers/blob.o 00:06:53.012 CXX test/cpp_headers/blob_bdev.o 00:06:53.012 CXX test/cpp_headers/config.o 00:06:53.012 CXX test/cpp_headers/cpuset.o 00:06:53.012 CXX test/cpp_headers/conf.o 00:06:53.012 CXX test/cpp_headers/crc32.o 00:06:53.012 CXX test/cpp_headers/dif.o 00:06:53.012 CXX test/cpp_headers/crc16.o 00:06:53.012 CXX test/cpp_headers/endian.o 00:06:53.012 CXX test/cpp_headers/crc64.o 00:06:53.012 CXX test/cpp_headers/env.o 00:06:53.012 CXX test/cpp_headers/env_dpdk.o 00:06:53.012 CXX test/cpp_headers/event.o 00:06:53.012 CXX test/cpp_headers/dma.o 00:06:53.012 CXX test/cpp_headers/file.o 00:06:53.012 CXX test/cpp_headers/fsdev.o 00:06:53.012 CXX test/cpp_headers/fd_group.o 00:06:53.012 CXX test/cpp_headers/fd.o 00:06:53.012 CXX test/cpp_headers/ftl.o 00:06:53.012 CXX test/cpp_headers/fsdev_module.o 00:06:53.012 CXX test/cpp_headers/gpt_spec.o 00:06:53.012 CXX test/cpp_headers/fuse_dispatcher.o 00:06:53.012 CXX test/cpp_headers/hexlify.o 00:06:53.012 CXX test/cpp_headers/idxd.o 00:06:53.012 CXX test/cpp_headers/idxd_spec.o 00:06:53.012 CXX test/cpp_headers/histogram_data.o 00:06:53.012 CXX 
test/cpp_headers/ioat_spec.o 00:06:53.012 CXX test/cpp_headers/init.o 00:06:53.012 CXX test/cpp_headers/json.o 00:06:53.012 CXX test/cpp_headers/ioat.o 00:06:53.012 CXX test/cpp_headers/jsonrpc.o 00:06:53.012 CXX test/cpp_headers/keyring.o 00:06:53.012 CXX test/cpp_headers/iscsi_spec.o 00:06:53.012 CXX test/cpp_headers/keyring_module.o 00:06:53.012 CXX test/cpp_headers/likely.o 00:06:53.012 CXX test/cpp_headers/log.o 00:06:53.012 CXX test/cpp_headers/lvol.o 00:06:53.012 CXX test/cpp_headers/md5.o 00:06:53.012 CXX test/cpp_headers/mmio.o 00:06:53.012 CXX test/cpp_headers/memory.o 00:06:53.012 CXX test/cpp_headers/nbd.o 00:06:53.012 CXX test/cpp_headers/net.o 00:06:53.012 CXX test/cpp_headers/notify.o 00:06:53.012 CXX test/cpp_headers/nvme.o 00:06:53.012 CXX test/cpp_headers/nvme_intel.o 00:06:53.012 CXX test/cpp_headers/nvme_ocssd.o 00:06:53.012 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:53.012 CXX test/cpp_headers/nvme_zns.o 00:06:53.012 CXX test/cpp_headers/nvme_spec.o 00:06:53.012 CXX test/cpp_headers/nvmf_cmd.o 00:06:53.012 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:53.012 CXX test/cpp_headers/nvmf_spec.o 00:06:53.012 CXX test/cpp_headers/nvmf.o 00:06:53.012 CXX test/cpp_headers/nvmf_transport.o 00:06:53.012 CXX test/cpp_headers/opal.o 00:06:53.012 CC examples/ioat/perf/perf.o 00:06:53.012 CC examples/ioat/verify/verify.o 00:06:53.012 CC test/env/vtophys/vtophys.o 00:06:53.012 CC test/app/histogram_perf/histogram_perf.o 00:06:53.012 CC test/app/stub/stub.o 00:06:53.012 CC test/thread/poller_perf/poller_perf.o 00:06:53.012 CC test/app/jsoncat/jsoncat.o 00:06:53.012 CC app/fio/nvme/fio_plugin.o 00:06:53.012 CC test/app/bdev_svc/bdev_svc.o 00:06:53.012 CXX test/cpp_headers/opal_spec.o 00:06:53.012 CC examples/util/zipf/zipf.o 00:06:53.012 CC test/env/memory/memory_ut.o 00:06:53.012 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:53.012 LINK spdk_lspci 00:06:53.285 CC test/env/pci/pci_ut.o 00:06:53.285 CC test/dma/test_dma/test_dma.o 00:06:53.285 CC 
app/fio/bdev/fio_plugin.o 00:06:53.550 LINK iscsi_tgt 00:06:53.550 CC test/env/mem_callbacks/mem_callbacks.o 00:06:53.550 LINK vtophys 00:06:53.550 LINK histogram_perf 00:06:53.550 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:53.550 CXX test/cpp_headers/pci_ids.o 00:06:53.550 CXX test/cpp_headers/pipe.o 00:06:53.550 CXX test/cpp_headers/queue.o 00:06:53.550 LINK rpc_client_test 00:06:53.550 CXX test/cpp_headers/reduce.o 00:06:53.550 CXX test/cpp_headers/rpc.o 00:06:53.550 CXX test/cpp_headers/scheduler.o 00:06:53.550 CXX test/cpp_headers/scsi.o 00:06:53.550 LINK interrupt_tgt 00:06:53.550 LINK spdk_tgt 00:06:53.550 CXX test/cpp_headers/scsi_spec.o 00:06:53.550 LINK stub 00:06:53.550 CXX test/cpp_headers/sock.o 00:06:53.550 CXX test/cpp_headers/stdinc.o 00:06:53.550 CXX test/cpp_headers/thread.o 00:06:53.550 CXX test/cpp_headers/trace.o 00:06:53.550 CXX test/cpp_headers/string.o 00:06:53.550 CXX test/cpp_headers/trace_parser.o 00:06:53.550 LINK spdk_nvme_discover 00:06:53.550 CXX test/cpp_headers/ublk.o 00:06:53.550 CXX test/cpp_headers/util.o 00:06:53.550 CXX test/cpp_headers/uuid.o 00:06:53.550 CXX test/cpp_headers/tree.o 00:06:53.550 CXX test/cpp_headers/version.o 00:06:53.550 CXX test/cpp_headers/vhost.o 00:06:53.550 LINK bdev_svc 00:06:53.550 CXX test/cpp_headers/vfio_user_pci.o 00:06:53.550 CXX test/cpp_headers/xor.o 00:06:53.550 CXX test/cpp_headers/vfio_user_spec.o 00:06:53.550 CXX test/cpp_headers/vmd.o 00:06:53.550 LINK ioat_perf 00:06:53.550 LINK verify 00:06:53.550 CXX test/cpp_headers/zipf.o 00:06:53.550 LINK nvmf_tgt 00:06:53.808 LINK spdk_dd 00:06:53.808 LINK spdk_trace_record 00:06:53.808 LINK poller_perf 00:06:53.808 LINK jsoncat 00:06:53.808 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:53.808 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:53.808 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:53.808 LINK zipf 00:06:53.808 LINK env_dpdk_post_init 00:06:54.066 LINK spdk_trace 00:06:54.066 LINK pci_ut 00:06:54.066 LINK spdk_bdev 00:06:54.066 LINK 
spdk_nvme 00:06:54.066 LINK test_dma 00:06:54.066 CC test/event/event_perf/event_perf.o 00:06:54.066 CC test/event/reactor/reactor.o 00:06:54.066 CC test/event/reactor_perf/reactor_perf.o 00:06:54.066 CC test/event/app_repeat/app_repeat.o 00:06:54.066 LINK nvme_fuzz 00:06:54.066 CC examples/idxd/perf/perf.o 00:06:54.066 LINK spdk_top 00:06:54.066 CC examples/sock/hello_world/hello_sock.o 00:06:54.066 CC examples/vmd/lsvmd/lsvmd.o 00:06:54.066 CC examples/vmd/led/led.o 00:06:54.066 CC test/event/scheduler/scheduler.o 00:06:54.324 LINK vhost_fuzz 00:06:54.324 LINK spdk_nvme_perf 00:06:54.324 CC examples/thread/thread/thread_ex.o 00:06:54.324 LINK reactor 00:06:54.324 LINK mem_callbacks 00:06:54.324 LINK spdk_nvme_identify 00:06:54.324 CC app/vhost/vhost.o 00:06:54.324 LINK reactor_perf 00:06:54.324 LINK event_perf 00:06:54.324 LINK lsvmd 00:06:54.324 LINK app_repeat 00:06:54.324 LINK led 00:06:54.324 LINK hello_sock 00:06:54.324 LINK scheduler 00:06:54.582 LINK idxd_perf 00:06:54.582 LINK thread 00:06:54.582 LINK vhost 00:06:54.582 LINK memory_ut 00:06:54.582 CC test/nvme/e2edp/nvme_dp.o 00:06:54.582 CC test/nvme/reset/reset.o 00:06:54.582 CC test/nvme/err_injection/err_injection.o 00:06:54.582 CC test/nvme/simple_copy/simple_copy.o 00:06:54.582 CC test/nvme/aer/aer.o 00:06:54.582 CC test/nvme/fused_ordering/fused_ordering.o 00:06:54.582 CC test/nvme/reserve/reserve.o 00:06:54.582 CC test/nvme/startup/startup.o 00:06:54.582 CC test/nvme/boot_partition/boot_partition.o 00:06:54.582 CC test/nvme/overhead/overhead.o 00:06:54.582 CC test/nvme/connect_stress/connect_stress.o 00:06:54.582 CC test/nvme/fdp/fdp.o 00:06:54.582 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:54.582 CC test/nvme/compliance/nvme_compliance.o 00:06:54.582 CC test/nvme/sgl/sgl.o 00:06:54.582 CC test/nvme/cuse/cuse.o 00:06:54.582 CC test/accel/dif/dif.o 00:06:54.582 CC test/blobfs/mkfs/mkfs.o 00:06:54.841 CC test/lvol/esnap/esnap.o 00:06:54.841 LINK startup 00:06:54.841 CC 
examples/nvme/abort/abort.o 00:06:54.841 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:54.841 CC examples/nvme/hello_world/hello_world.o 00:06:54.841 LINK boot_partition 00:06:54.841 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:54.841 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:54.841 CC examples/nvme/hotplug/hotplug.o 00:06:54.841 CC examples/nvme/arbitration/arbitration.o 00:06:54.841 LINK err_injection 00:06:54.841 CC examples/nvme/reconnect/reconnect.o 00:06:54.841 LINK doorbell_aers 00:06:54.841 LINK connect_stress 00:06:54.841 LINK fused_ordering 00:06:54.841 LINK reserve 00:06:54.841 LINK simple_copy 00:06:54.841 LINK nvme_dp 00:06:54.841 LINK aer 00:06:54.841 LINK sgl 00:06:54.841 LINK mkfs 00:06:54.841 LINK reset 00:06:55.100 LINK overhead 00:06:55.100 LINK nvme_compliance 00:06:55.100 LINK fdp 00:06:55.100 CC examples/accel/perf/accel_perf.o 00:06:55.100 LINK pmr_persistence 00:06:55.100 CC examples/blob/hello_world/hello_blob.o 00:06:55.100 CC examples/blob/cli/blobcli.o 00:06:55.100 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:55.100 LINK cmb_copy 00:06:55.100 LINK hello_world 00:06:55.100 LINK hotplug 00:06:55.100 LINK arbitration 00:06:55.100 LINK abort 00:06:55.358 LINK iscsi_fuzz 00:06:55.358 LINK reconnect 00:06:55.358 LINK dif 00:06:55.358 LINK nvme_manage 00:06:55.358 LINK hello_blob 00:06:55.358 LINK hello_fsdev 00:06:55.358 LINK accel_perf 00:06:55.358 LINK blobcli 00:06:55.926 LINK cuse 00:06:55.926 CC test/bdev/bdevio/bdevio.o 00:06:55.926 CC examples/bdev/hello_world/hello_bdev.o 00:06:55.926 CC examples/bdev/bdevperf/bdevperf.o 00:06:56.185 LINK bdevio 00:06:56.185 LINK hello_bdev 00:06:56.444 LINK bdevperf 00:06:57.012 CC examples/nvmf/nvmf/nvmf.o 00:06:57.271 LINK nvmf 00:06:58.648 LINK esnap 00:06:58.648 00:06:58.648 real 0m55.908s 00:06:58.648 user 8m18.029s 00:06:58.648 sys 3m42.559s 00:06:58.648 13:38:41 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:58.648 13:38:41 make -- 
common/autotest_common.sh@10 -- $ set +x 00:06:58.648 ************************************ 00:06:58.648 END TEST make 00:06:58.648 ************************************ 00:06:58.648 13:38:41 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:58.648 13:38:41 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:58.648 13:38:41 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:58.648 13:38:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:58.648 13:38:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:06:58.648 13:38:41 -- pm/common@44 -- $ pid=390967 00:06:58.648 13:38:41 -- pm/common@50 -- $ kill -TERM 390967 00:06:58.648 13:38:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:58.648 13:38:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:06:58.648 13:38:41 -- pm/common@44 -- $ pid=390968 00:06:58.648 13:38:41 -- pm/common@50 -- $ kill -TERM 390968 00:06:58.648 13:38:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:58.648 13:38:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:06:58.648 13:38:41 -- pm/common@44 -- $ pid=390970 00:06:58.648 13:38:41 -- pm/common@50 -- $ kill -TERM 390970 00:06:58.648 13:38:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:58.648 13:38:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:06:58.648 13:38:41 -- pm/common@44 -- $ pid=390994 00:06:58.648 13:38:41 -- pm/common@50 -- $ sudo -E kill -TERM 390994 00:06:58.648 13:38:41 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:58.648 13:38:41 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:58.908 13:38:41 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:58.908 13:38:41 -- common/autotest_common.sh@1711 -- # lcov --version 00:06:58.908 13:38:41 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:58.908 13:38:41 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:58.908 13:38:41 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.908 13:38:41 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.908 13:38:41 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.908 13:38:41 -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.908 13:38:41 -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.908 13:38:41 -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.908 13:38:41 -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.908 13:38:41 -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.908 13:38:41 -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.908 13:38:41 -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.908 13:38:41 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.908 13:38:41 -- scripts/common.sh@344 -- # case "$op" in 00:06:58.908 13:38:41 -- scripts/common.sh@345 -- # : 1 00:06:58.908 13:38:41 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.908 13:38:41 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:58.908 13:38:41 -- scripts/common.sh@365 -- # decimal 1 00:06:58.908 13:38:41 -- scripts/common.sh@353 -- # local d=1 00:06:58.908 13:38:41 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.908 13:38:41 -- scripts/common.sh@355 -- # echo 1 00:06:58.908 13:38:41 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.908 13:38:41 -- scripts/common.sh@366 -- # decimal 2 00:06:58.908 13:38:41 -- scripts/common.sh@353 -- # local d=2 00:06:58.908 13:38:41 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.908 13:38:41 -- scripts/common.sh@355 -- # echo 2 00:06:58.908 13:38:41 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.908 13:38:41 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.908 13:38:41 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.908 13:38:41 -- scripts/common.sh@368 -- # return 0 00:06:58.908 13:38:41 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.908 13:38:41 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:58.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.908 --rc genhtml_branch_coverage=1 00:06:58.908 --rc genhtml_function_coverage=1 00:06:58.908 --rc genhtml_legend=1 00:06:58.908 --rc geninfo_all_blocks=1 00:06:58.908 --rc geninfo_unexecuted_blocks=1 00:06:58.908 00:06:58.908 ' 00:06:58.908 13:38:41 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:58.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.908 --rc genhtml_branch_coverage=1 00:06:58.908 --rc genhtml_function_coverage=1 00:06:58.908 --rc genhtml_legend=1 00:06:58.908 --rc geninfo_all_blocks=1 00:06:58.908 --rc geninfo_unexecuted_blocks=1 00:06:58.908 00:06:58.908 ' 00:06:58.908 13:38:41 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:58.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.908 --rc genhtml_branch_coverage=1 00:06:58.908 --rc 
genhtml_function_coverage=1 00:06:58.908 --rc genhtml_legend=1 00:06:58.908 --rc geninfo_all_blocks=1 00:06:58.908 --rc geninfo_unexecuted_blocks=1 00:06:58.908 00:06:58.908 ' 00:06:58.908 13:38:41 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:58.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.908 --rc genhtml_branch_coverage=1 00:06:58.908 --rc genhtml_function_coverage=1 00:06:58.908 --rc genhtml_legend=1 00:06:58.908 --rc geninfo_all_blocks=1 00:06:58.908 --rc geninfo_unexecuted_blocks=1 00:06:58.908 00:06:58.908 ' 00:06:58.908 13:38:41 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:58.908 13:38:41 -- nvmf/common.sh@7 -- # uname -s 00:06:58.908 13:38:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:58.908 13:38:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:58.908 13:38:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:58.908 13:38:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:58.908 13:38:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:58.908 13:38:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:58.908 13:38:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:58.908 13:38:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:58.908 13:38:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:58.908 13:38:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:58.908 13:38:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:58.908 13:38:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:58.909 13:38:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:58.909 13:38:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:58.909 13:38:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:58.909 13:38:41 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:58.909 13:38:41 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:58.909 13:38:41 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:58.909 13:38:41 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.909 13:38:41 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.909 13:38:41 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.909 13:38:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.909 13:38:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.909 13:38:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.909 13:38:41 -- paths/export.sh@5 -- # export PATH 00:06:58.909 13:38:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.909 13:38:41 -- nvmf/common.sh@51 -- # : 0 00:06:58.909 13:38:41 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:58.909 13:38:41 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:06:58.909 13:38:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:58.909 13:38:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:58.909 13:38:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:58.909 13:38:41 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:58.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:58.909 13:38:41 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:58.909 13:38:41 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:58.909 13:38:41 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:58.909 13:38:41 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:58.909 13:38:41 -- spdk/autotest.sh@32 -- # uname -s 00:06:58.909 13:38:41 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:58.909 13:38:41 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:58.909 13:38:41 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:06:58.909 13:38:41 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:06:58.909 13:38:41 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:06:58.909 13:38:41 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:58.909 13:38:41 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:58.909 13:38:41 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:58.909 13:38:41 -- spdk/autotest.sh@48 -- # udevadm_pid=453421 00:06:58.909 13:38:41 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:58.909 13:38:41 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:58.909 13:38:41 -- pm/common@17 -- # local monitor 00:06:58.909 13:38:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:58.909 13:38:41 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:06:58.909 13:38:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:58.909 13:38:41 -- pm/common@21 -- # date +%s 00:06:58.909 13:38:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:58.909 13:38:41 -- pm/common@21 -- # date +%s 00:06:58.909 13:38:41 -- pm/common@25 -- # sleep 1 00:06:58.909 13:38:41 -- pm/common@21 -- # date +%s 00:06:58.909 13:38:41 -- pm/common@21 -- # date +%s 00:06:58.909 13:38:41 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733402321 00:06:58.909 13:38:41 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733402321 00:06:58.909 13:38:41 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733402321 00:06:58.909 13:38:41 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733402321 00:06:58.909 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733402321_collect-cpu-load.pm.log 00:06:58.909 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733402321_collect-vmstat.pm.log 00:06:58.909 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733402321_collect-cpu-temp.pm.log 00:06:58.909 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733402321_collect-bmc-pm.bmc.pm.log 00:06:59.847 
13:38:42 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:59.847 13:38:42 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:59.847 13:38:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:59.847 13:38:42 -- common/autotest_common.sh@10 -- # set +x 00:06:59.847 13:38:42 -- spdk/autotest.sh@59 -- # create_test_list 00:06:59.847 13:38:42 -- common/autotest_common.sh@752 -- # xtrace_disable 00:06:59.847 13:38:42 -- common/autotest_common.sh@10 -- # set +x 00:07:00.106 13:38:42 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:07:00.106 13:38:42 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:00.106 13:38:42 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:00.106 13:38:42 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:07:00.106 13:38:42 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:00.106 13:38:42 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:07:00.106 13:38:42 -- common/autotest_common.sh@1457 -- # uname 00:07:00.106 13:38:42 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:07:00.106 13:38:42 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:07:00.106 13:38:42 -- common/autotest_common.sh@1477 -- # uname 00:07:00.106 13:38:42 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:07:00.106 13:38:42 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:07:00.106 13:38:42 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:07:00.106 lcov: LCOV version 1.15 00:07:00.106 13:38:42 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:07:12.313 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:12.313 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:07:27.191 13:39:07 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:07:27.191 13:39:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:27.191 13:39:07 -- common/autotest_common.sh@10 -- # set +x 00:07:27.192 13:39:07 -- spdk/autotest.sh@78 -- # rm -f 00:07:27.192 13:39:07 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:28.129 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:07:28.129 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:07:28.129 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:07:28.129 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:07:28.129 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:07:28.130 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:07:28.130 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:07:28.130 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:07:28.130 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:07:28.130 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:07:28.130 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:07:28.130 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:07:28.388 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:07:28.388 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:07:28.388 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:07:28.388 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:07:28.388 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:07:28.388 13:39:10 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:07:28.388 13:39:10 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:28.388 13:39:10 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:28.388 13:39:10 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:07:28.388 13:39:10 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:07:28.388 13:39:10 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:07:28.388 13:39:10 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:28.388 13:39:10 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:07:28.388 13:39:10 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:28.388 13:39:10 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:07:28.388 13:39:10 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:28.388 13:39:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:28.388 13:39:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:28.388 13:39:10 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:07:28.388 13:39:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:28.388 13:39:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:28.388 13:39:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:07:28.388 13:39:10 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:07:28.389 13:39:10 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:28.389 No valid GPT data, bailing 00:07:28.389 13:39:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:28.389 13:39:10 -- scripts/common.sh@394 -- # pt= 00:07:28.389 13:39:10 -- scripts/common.sh@395 -- 
# return 1 00:07:28.389 13:39:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:28.389 1+0 records in 00:07:28.389 1+0 records out 00:07:28.389 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00425015 s, 247 MB/s 00:07:28.389 13:39:10 -- spdk/autotest.sh@105 -- # sync 00:07:28.389 13:39:10 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:28.389 13:39:10 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:28.389 13:39:10 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:34.960 13:39:16 -- spdk/autotest.sh@111 -- # uname -s 00:07:34.960 13:39:16 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:07:34.960 13:39:16 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:07:34.960 13:39:16 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:07:36.873 Hugepages 00:07:36.873 node hugesize free / total 00:07:36.873 node0 1048576kB 0 / 0 00:07:36.873 node0 2048kB 0 / 0 00:07:36.873 node1 1048576kB 0 / 0 00:07:36.873 node1 2048kB 0 / 0 00:07:36.873 00:07:36.873 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:36.873 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:07:36.873 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:07:36.873 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:07:36.873 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:07:36.873 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:07:36.873 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:07:36.873 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:07:36.873 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:07:36.873 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:07:36.873 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:07:36.873 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:07:36.873 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:07:36.873 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:07:36.873 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:07:36.873 I/OAT 0000:80:04.5 8086 
2021 1 ioatdma - - 00:07:36.873 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:07:36.873 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:07:36.873 13:39:19 -- spdk/autotest.sh@117 -- # uname -s 00:07:36.873 13:39:19 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:07:36.873 13:39:19 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:07:36.873 13:39:19 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:40.167 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:07:40.167 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:07:40.167 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:07:40.167 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:07:40.167 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:07:40.167 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:07:40.167 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:07:40.167 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:07:40.168 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:07:40.168 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:07:40.168 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:07:40.168 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:07:40.168 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:07:40.168 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:07:40.168 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:07:40.168 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:07:41.105 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:07:41.364 13:39:23 -- common/autotest_common.sh@1517 -- # sleep 1 00:07:42.302 13:39:24 -- common/autotest_common.sh@1518 -- # bdfs=() 00:07:42.302 13:39:24 -- common/autotest_common.sh@1518 -- # local bdfs 00:07:42.302 13:39:24 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:07:42.302 13:39:24 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:07:42.302 13:39:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:42.302 13:39:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:42.302 13:39:24 -- 
common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:42.302 13:39:24 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:07:42.302 13:39:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:42.302 13:39:24 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:07:42.302 13:39:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:07:42.302 13:39:24 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:45.595 Waiting for block devices as requested 00:07:45.595 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:07:45.595 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:07:45.595 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:45.595 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:45.595 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:07:45.595 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:07:45.595 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:07:45.860 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:07:45.860 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:07:45.860 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:07:45.860 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:46.203 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:46.203 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:07:46.203 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:07:46.203 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:07:46.547 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:07:46.547 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:07:46.547 13:39:28 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:46.547 13:39:28 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:07:46.547 13:39:28 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:07:46.547 13:39:28 -- 
common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:07:46.547 13:39:29 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:07:46.547 13:39:29 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:07:46.547 13:39:29 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:07:46.547 13:39:29 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:07:46.547 13:39:29 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:07:46.547 13:39:29 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:07:46.547 13:39:29 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:07:46.547 13:39:29 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:46.547 13:39:29 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:46.547 13:39:29 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:07:46.547 13:39:29 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:46.547 13:39:29 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:46.547 13:39:29 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:07:46.547 13:39:29 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:46.547 13:39:29 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:46.547 13:39:29 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:46.547 13:39:29 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:46.547 13:39:29 -- common/autotest_common.sh@1543 -- # continue 00:07:46.547 13:39:29 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:46.547 13:39:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:46.547 13:39:29 -- common/autotest_common.sh@10 -- # set +x 00:07:46.547 13:39:29 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:46.547 13:39:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:46.547 
13:39:29 -- common/autotest_common.sh@10 -- # set +x 00:07:46.547 13:39:29 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:49.876 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:07:49.876 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:07:49.876 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:07:49.876 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:07:49.876 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:07:49.876 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:07:49.876 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:07:49.876 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:07:49.876 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:07:49.876 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:07:49.876 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:07:49.876 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:07:49.876 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:07:49.876 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:07:49.876 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:07:49.876 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:07:51.252 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:07:51.252 13:39:33 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:07:51.252 13:39:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:51.252 13:39:33 -- common/autotest_common.sh@10 -- # set +x 00:07:51.252 13:39:33 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:51.252 13:39:33 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:07:51.252 13:39:33 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:07:51.252 13:39:33 -- common/autotest_common.sh@1563 -- # bdfs=() 00:07:51.252 13:39:33 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:07:51.252 13:39:33 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:07:51.252 13:39:33 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:07:51.252 13:39:33 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 
00:07:51.252 13:39:33 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:51.252 13:39:33 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:51.252 13:39:33 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:51.252 13:39:33 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:07:51.252 13:39:33 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:51.252 13:39:33 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:07:51.252 13:39:33 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:07:51.252 13:39:33 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:51.252 13:39:33 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:07:51.252 13:39:33 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:07:51.252 13:39:33 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:07:51.252 13:39:33 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:07:51.252 13:39:33 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:07:51.252 13:39:33 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:07:51.252 13:39:33 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:07:51.252 13:39:33 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=468408 00:07:51.252 13:39:33 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:51.252 13:39:33 -- common/autotest_common.sh@1585 -- # waitforlisten 468408 00:07:51.252 13:39:33 -- common/autotest_common.sh@835 -- # '[' -z 468408 ']' 00:07:51.252 13:39:33 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.252 13:39:33 -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.252 13:39:33 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.252 13:39:33 -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.252 13:39:33 -- common/autotest_common.sh@10 -- # set +x 00:07:51.252 [2024-12-05 13:39:33.767272] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:07:51.252 [2024-12-05 13:39:33.767321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid468408 ] 00:07:51.510 [2024-12-05 13:39:33.842566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.510 [2024-12-05 13:39:33.884782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.768 13:39:34 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.768 13:39:34 -- common/autotest_common.sh@868 -- # return 0 00:07:51.768 13:39:34 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:07:51.768 13:39:34 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:07:51.768 13:39:34 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:07:55.050 nvme0n1 00:07:55.050 13:39:37 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:07:55.050 [2024-12-05 13:39:37.280069] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:07:55.050 request: 00:07:55.050 { 00:07:55.050 "nvme_ctrlr_name": "nvme0", 00:07:55.050 "password": "test", 00:07:55.050 "method": "bdev_nvme_opal_revert", 00:07:55.050 "req_id": 1 00:07:55.050 } 00:07:55.050 Got JSON-RPC error response 00:07:55.050 response: 00:07:55.050 { 00:07:55.050 
"code": -32602, 00:07:55.050 "message": "Invalid parameters" 00:07:55.050 } 00:07:55.050 13:39:37 -- common/autotest_common.sh@1591 -- # true 00:07:55.050 13:39:37 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:07:55.050 13:39:37 -- common/autotest_common.sh@1595 -- # killprocess 468408 00:07:55.050 13:39:37 -- common/autotest_common.sh@954 -- # '[' -z 468408 ']' 00:07:55.050 13:39:37 -- common/autotest_common.sh@958 -- # kill -0 468408 00:07:55.050 13:39:37 -- common/autotest_common.sh@959 -- # uname 00:07:55.050 13:39:37 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.050 13:39:37 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 468408 00:07:55.050 13:39:37 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:55.050 13:39:37 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:55.050 13:39:37 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 468408' 00:07:55.050 killing process with pid 468408 00:07:55.050 13:39:37 -- common/autotest_common.sh@973 -- # kill 468408 00:07:55.050 13:39:37 -- common/autotest_common.sh@978 -- # wait 468408 00:07:56.951 13:39:39 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:56.951 13:39:39 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:56.951 13:39:39 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:56.951 13:39:39 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:56.951 13:39:39 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:56.951 13:39:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:56.951 13:39:39 -- common/autotest_common.sh@10 -- # set +x 00:07:56.951 13:39:39 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:56.951 13:39:39 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:07:56.951 13:39:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.951 13:39:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.951 13:39:39 -- 
common/autotest_common.sh@10 -- # set +x 00:07:57.209 ************************************ 00:07:57.209 START TEST env 00:07:57.209 ************************************ 00:07:57.209 13:39:39 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:07:57.209 * Looking for test storage... 00:07:57.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:07:57.209 13:39:39 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:57.209 13:39:39 env -- common/autotest_common.sh@1711 -- # lcov --version 00:07:57.209 13:39:39 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:57.209 13:39:39 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:57.209 13:39:39 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.209 13:39:39 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.209 13:39:39 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.209 13:39:39 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.209 13:39:39 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.209 13:39:39 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.209 13:39:39 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.209 13:39:39 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.209 13:39:39 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.209 13:39:39 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.209 13:39:39 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.209 13:39:39 env -- scripts/common.sh@344 -- # case "$op" in 00:07:57.209 13:39:39 env -- scripts/common.sh@345 -- # : 1 00:07:57.209 13:39:39 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.209 13:39:39 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:57.209 13:39:39 env -- scripts/common.sh@365 -- # decimal 1 00:07:57.209 13:39:39 env -- scripts/common.sh@353 -- # local d=1 00:07:57.209 13:39:39 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.209 13:39:39 env -- scripts/common.sh@355 -- # echo 1 00:07:57.209 13:39:39 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.209 13:39:39 env -- scripts/common.sh@366 -- # decimal 2 00:07:57.209 13:39:39 env -- scripts/common.sh@353 -- # local d=2 00:07:57.209 13:39:39 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.209 13:39:39 env -- scripts/common.sh@355 -- # echo 2 00:07:57.209 13:39:39 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.209 13:39:39 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.209 13:39:39 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.209 13:39:39 env -- scripts/common.sh@368 -- # return 0 00:07:57.209 13:39:39 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.209 13:39:39 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:57.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.209 --rc genhtml_branch_coverage=1 00:07:57.209 --rc genhtml_function_coverage=1 00:07:57.209 --rc genhtml_legend=1 00:07:57.209 --rc geninfo_all_blocks=1 00:07:57.209 --rc geninfo_unexecuted_blocks=1 00:07:57.209 00:07:57.209 ' 00:07:57.209 13:39:39 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:57.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.209 --rc genhtml_branch_coverage=1 00:07:57.209 --rc genhtml_function_coverage=1 00:07:57.209 --rc genhtml_legend=1 00:07:57.209 --rc geninfo_all_blocks=1 00:07:57.209 --rc geninfo_unexecuted_blocks=1 00:07:57.209 00:07:57.209 ' 00:07:57.209 13:39:39 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:57.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:57.209 --rc genhtml_branch_coverage=1 00:07:57.209 --rc genhtml_function_coverage=1 00:07:57.209 --rc genhtml_legend=1 00:07:57.209 --rc geninfo_all_blocks=1 00:07:57.209 --rc geninfo_unexecuted_blocks=1 00:07:57.209 00:07:57.209 ' 00:07:57.209 13:39:39 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:57.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.209 --rc genhtml_branch_coverage=1 00:07:57.209 --rc genhtml_function_coverage=1 00:07:57.209 --rc genhtml_legend=1 00:07:57.209 --rc geninfo_all_blocks=1 00:07:57.209 --rc geninfo_unexecuted_blocks=1 00:07:57.209 00:07:57.209 ' 00:07:57.209 13:39:39 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:07:57.209 13:39:39 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.209 13:39:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.209 13:39:39 env -- common/autotest_common.sh@10 -- # set +x 00:07:57.209 ************************************ 00:07:57.209 START TEST env_memory 00:07:57.209 ************************************ 00:07:57.209 13:39:39 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:07:57.209 00:07:57.209 00:07:57.209 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.209 http://cunit.sourceforge.net/ 00:07:57.209 00:07:57.209 00:07:57.209 Suite: memory 00:07:57.469 Test: alloc and free memory map ...[2024-12-05 13:39:39.801571] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:57.469 passed 00:07:57.469 Test: mem map translation ...[2024-12-05 13:39:39.819159] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:57.469 [2024-12-05 
13:39:39.819173] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:57.469 [2024-12-05 13:39:39.819207] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:57.469 [2024-12-05 13:39:39.819213] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:57.469 passed 00:07:57.469 Test: mem map registration ...[2024-12-05 13:39:39.854756] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:57.469 [2024-12-05 13:39:39.854776] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:57.469 passed 00:07:57.469 Test: mem map adjacent registrations ...passed 00:07:57.469 00:07:57.469 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.469 suites 1 1 n/a 0 0 00:07:57.469 tests 4 4 4 0 0 00:07:57.469 asserts 152 152 152 0 n/a 00:07:57.469 00:07:57.469 Elapsed time = 0.130 seconds 00:07:57.469 00:07:57.469 real 0m0.143s 00:07:57.469 user 0m0.132s 00:07:57.469 sys 0m0.011s 00:07:57.469 13:39:39 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.469 13:39:39 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:57.469 ************************************ 00:07:57.469 END TEST env_memory 00:07:57.469 ************************************ 00:07:57.469 13:39:39 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:07:57.469 13:39:39 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:07:57.469 13:39:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.469 13:39:39 env -- common/autotest_common.sh@10 -- # set +x 00:07:57.469 ************************************ 00:07:57.469 START TEST env_vtophys 00:07:57.469 ************************************ 00:07:57.469 13:39:39 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:07:57.469 EAL: lib.eal log level changed from notice to debug 00:07:57.469 EAL: Detected lcore 0 as core 0 on socket 0 00:07:57.469 EAL: Detected lcore 1 as core 1 on socket 0 00:07:57.469 EAL: Detected lcore 2 as core 2 on socket 0 00:07:57.469 EAL: Detected lcore 3 as core 3 on socket 0 00:07:57.469 EAL: Detected lcore 4 as core 4 on socket 0 00:07:57.469 EAL: Detected lcore 5 as core 5 on socket 0 00:07:57.469 EAL: Detected lcore 6 as core 6 on socket 0 00:07:57.469 EAL: Detected lcore 7 as core 8 on socket 0 00:07:57.469 EAL: Detected lcore 8 as core 9 on socket 0 00:07:57.469 EAL: Detected lcore 9 as core 10 on socket 0 00:07:57.469 EAL: Detected lcore 10 as core 11 on socket 0 00:07:57.469 EAL: Detected lcore 11 as core 12 on socket 0 00:07:57.469 EAL: Detected lcore 12 as core 13 on socket 0 00:07:57.469 EAL: Detected lcore 13 as core 16 on socket 0 00:07:57.469 EAL: Detected lcore 14 as core 17 on socket 0 00:07:57.469 EAL: Detected lcore 15 as core 18 on socket 0 00:07:57.469 EAL: Detected lcore 16 as core 19 on socket 0 00:07:57.469 EAL: Detected lcore 17 as core 20 on socket 0 00:07:57.469 EAL: Detected lcore 18 as core 21 on socket 0 00:07:57.469 EAL: Detected lcore 19 as core 25 on socket 0 00:07:57.469 EAL: Detected lcore 20 as core 26 on socket 0 00:07:57.469 EAL: Detected lcore 21 as core 27 on socket 0 00:07:57.469 EAL: Detected lcore 22 as core 28 on socket 0 00:07:57.469 EAL: Detected lcore 23 as core 29 on socket 0 00:07:57.469 EAL: Detected lcore 24 as core 0 on socket 1 00:07:57.469 EAL: Detected lcore 25 
as core 1 on socket 1 00:07:57.469 EAL: Detected lcore 26 as core 2 on socket 1 00:07:57.469 EAL: Detected lcore 27 as core 3 on socket 1 00:07:57.469 EAL: Detected lcore 28 as core 4 on socket 1 00:07:57.469 EAL: Detected lcore 29 as core 5 on socket 1 00:07:57.469 EAL: Detected lcore 30 as core 6 on socket 1 00:07:57.469 EAL: Detected lcore 31 as core 8 on socket 1 00:07:57.469 EAL: Detected lcore 32 as core 10 on socket 1 00:07:57.469 EAL: Detected lcore 33 as core 11 on socket 1 00:07:57.469 EAL: Detected lcore 34 as core 12 on socket 1 00:07:57.469 EAL: Detected lcore 35 as core 13 on socket 1 00:07:57.469 EAL: Detected lcore 36 as core 16 on socket 1 00:07:57.469 EAL: Detected lcore 37 as core 17 on socket 1 00:07:57.469 EAL: Detected lcore 38 as core 18 on socket 1 00:07:57.469 EAL: Detected lcore 39 as core 19 on socket 1 00:07:57.469 EAL: Detected lcore 40 as core 20 on socket 1 00:07:57.469 EAL: Detected lcore 41 as core 21 on socket 1 00:07:57.469 EAL: Detected lcore 42 as core 24 on socket 1 00:07:57.469 EAL: Detected lcore 43 as core 25 on socket 1 00:07:57.469 EAL: Detected lcore 44 as core 26 on socket 1 00:07:57.469 EAL: Detected lcore 45 as core 27 on socket 1 00:07:57.469 EAL: Detected lcore 46 as core 28 on socket 1 00:07:57.469 EAL: Detected lcore 47 as core 29 on socket 1 00:07:57.469 EAL: Detected lcore 48 as core 0 on socket 0 00:07:57.469 EAL: Detected lcore 49 as core 1 on socket 0 00:07:57.469 EAL: Detected lcore 50 as core 2 on socket 0 00:07:57.469 EAL: Detected lcore 51 as core 3 on socket 0 00:07:57.469 EAL: Detected lcore 52 as core 4 on socket 0 00:07:57.469 EAL: Detected lcore 53 as core 5 on socket 0 00:07:57.469 EAL: Detected lcore 54 as core 6 on socket 0 00:07:57.469 EAL: Detected lcore 55 as core 8 on socket 0 00:07:57.469 EAL: Detected lcore 56 as core 9 on socket 0 00:07:57.469 EAL: Detected lcore 57 as core 10 on socket 0 00:07:57.469 EAL: Detected lcore 58 as core 11 on socket 0 00:07:57.469 EAL: Detected lcore 59 as core 
12 on socket 0 00:07:57.469 EAL: Detected lcore 60 as core 13 on socket 0 00:07:57.469 EAL: Detected lcore 61 as core 16 on socket 0 00:07:57.469 EAL: Detected lcore 62 as core 17 on socket 0 00:07:57.469 EAL: Detected lcore 63 as core 18 on socket 0 00:07:57.469 EAL: Detected lcore 64 as core 19 on socket 0 00:07:57.469 EAL: Detected lcore 65 as core 20 on socket 0 00:07:57.469 EAL: Detected lcore 66 as core 21 on socket 0 00:07:57.469 EAL: Detected lcore 67 as core 25 on socket 0 00:07:57.469 EAL: Detected lcore 68 as core 26 on socket 0 00:07:57.469 EAL: Detected lcore 69 as core 27 on socket 0 00:07:57.469 EAL: Detected lcore 70 as core 28 on socket 0 00:07:57.469 EAL: Detected lcore 71 as core 29 on socket 0 00:07:57.469 EAL: Detected lcore 72 as core 0 on socket 1 00:07:57.469 EAL: Detected lcore 73 as core 1 on socket 1 00:07:57.469 EAL: Detected lcore 74 as core 2 on socket 1 00:07:57.469 EAL: Detected lcore 75 as core 3 on socket 1 00:07:57.469 EAL: Detected lcore 76 as core 4 on socket 1 00:07:57.469 EAL: Detected lcore 77 as core 5 on socket 1 00:07:57.469 EAL: Detected lcore 78 as core 6 on socket 1 00:07:57.469 EAL: Detected lcore 79 as core 8 on socket 1 00:07:57.469 EAL: Detected lcore 80 as core 10 on socket 1 00:07:57.469 EAL: Detected lcore 81 as core 11 on socket 1 00:07:57.469 EAL: Detected lcore 82 as core 12 on socket 1 00:07:57.469 EAL: Detected lcore 83 as core 13 on socket 1 00:07:57.469 EAL: Detected lcore 84 as core 16 on socket 1 00:07:57.469 EAL: Detected lcore 85 as core 17 on socket 1 00:07:57.469 EAL: Detected lcore 86 as core 18 on socket 1 00:07:57.469 EAL: Detected lcore 87 as core 19 on socket 1 00:07:57.469 EAL: Detected lcore 88 as core 20 on socket 1 00:07:57.469 EAL: Detected lcore 89 as core 21 on socket 1 00:07:57.469 EAL: Detected lcore 90 as core 24 on socket 1 00:07:57.469 EAL: Detected lcore 91 as core 25 on socket 1 00:07:57.469 EAL: Detected lcore 92 as core 26 on socket 1 00:07:57.469 EAL: Detected lcore 93 as core 
27 on socket 1 00:07:57.469 EAL: Detected lcore 94 as core 28 on socket 1 00:07:57.469 EAL: Detected lcore 95 as core 29 on socket 1 00:07:57.469 EAL: Maximum logical cores by configuration: 128 00:07:57.469 EAL: Detected CPU lcores: 96 00:07:57.469 EAL: Detected NUMA nodes: 2 00:07:57.469 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:57.469 EAL: Detected shared linkage of DPDK 00:07:57.469 EAL: No shared files mode enabled, IPC will be disabled 00:07:57.469 EAL: Bus pci wants IOVA as 'DC' 00:07:57.469 EAL: Buses did not request a specific IOVA mode. 00:07:57.469 EAL: IOMMU is available, selecting IOVA as VA mode. 00:07:57.469 EAL: Selected IOVA mode 'VA' 00:07:57.469 EAL: Probing VFIO support... 00:07:57.469 EAL: IOMMU type 1 (Type 1) is supported 00:07:57.469 EAL: IOMMU type 7 (sPAPR) is not supported 00:07:57.469 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:07:57.469 EAL: VFIO support initialized 00:07:57.469 EAL: Ask a virtual area of 0x2e000 bytes 00:07:57.469 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:57.469 EAL: Setting up physically contiguous memory... 
00:07:57.469 EAL: Setting maximum number of open files to 524288 00:07:57.469 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:57.469 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:07:57.469 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:57.469 EAL: Ask a virtual area of 0x61000 bytes 00:07:57.469 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:57.469 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:57.469 EAL: Ask a virtual area of 0x400000000 bytes 00:07:57.469 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:57.469 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:57.469 EAL: Ask a virtual area of 0x61000 bytes 00:07:57.469 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:57.469 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:57.469 EAL: Ask a virtual area of 0x400000000 bytes 00:07:57.469 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:57.470 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:57.470 EAL: Ask a virtual area of 0x61000 bytes 00:07:57.470 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:57.470 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:57.470 EAL: Ask a virtual area of 0x400000000 bytes 00:07:57.470 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:57.470 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:57.470 EAL: Ask a virtual area of 0x61000 bytes 00:07:57.470 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:57.470 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:57.470 EAL: Ask a virtual area of 0x400000000 bytes 00:07:57.470 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:57.470 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:57.470 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:07:57.470 EAL: Ask a virtual area of 0x61000 bytes 00:07:57.470 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:07:57.470 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:57.470 EAL: Ask a virtual area of 0x400000000 bytes 00:07:57.470 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:07:57.470 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:07:57.470 EAL: Ask a virtual area of 0x61000 bytes 00:07:57.470 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:07:57.470 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:57.470 EAL: Ask a virtual area of 0x400000000 bytes 00:07:57.470 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:07:57.470 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:07:57.470 EAL: Ask a virtual area of 0x61000 bytes 00:07:57.470 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:07:57.470 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:57.470 EAL: Ask a virtual area of 0x400000000 bytes 00:07:57.470 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:07:57.470 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:07:57.470 EAL: Ask a virtual area of 0x61000 bytes 00:07:57.470 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:07:57.470 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:57.470 EAL: Ask a virtual area of 0x400000000 bytes 00:07:57.470 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:07:57.470 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:07:57.470 EAL: Hugepages will be freed exactly as allocated. 
00:07:57.470 EAL: No shared files mode enabled, IPC is disabled 00:07:57.470 EAL: No shared files mode enabled, IPC is disabled 00:07:57.470 EAL: TSC frequency is ~2100000 KHz 00:07:57.470 EAL: Main lcore 0 is ready (tid=7ff7031c9a00;cpuset=[0]) 00:07:57.470 EAL: Trying to obtain current memory policy. 00:07:57.470 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:57.470 EAL: Restoring previous memory policy: 0 00:07:57.470 EAL: request: mp_malloc_sync 00:07:57.470 EAL: No shared files mode enabled, IPC is disabled 00:07:57.470 EAL: Heap on socket 0 was expanded by 2MB 00:07:57.470 EAL: No shared files mode enabled, IPC is disabled 00:07:57.728 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:57.728 EAL: Mem event callback 'spdk:(nil)' registered 00:07:57.728 00:07:57.728 00:07:57.728 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.728 http://cunit.sourceforge.net/ 00:07:57.728 00:07:57.728 00:07:57.728 Suite: components_suite 00:07:57.728 Test: vtophys_malloc_test ...passed 00:07:57.728 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:57.728 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:57.728 EAL: Restoring previous memory policy: 4 00:07:57.728 EAL: Calling mem event callback 'spdk:(nil)' 00:07:57.728 EAL: request: mp_malloc_sync 00:07:57.728 EAL: No shared files mode enabled, IPC is disabled 00:07:57.728 EAL: Heap on socket 0 was expanded by 4MB 00:07:57.728 EAL: Calling mem event callback 'spdk:(nil)' 00:07:57.728 EAL: request: mp_malloc_sync 00:07:57.728 EAL: No shared files mode enabled, IPC is disabled 00:07:57.728 EAL: Heap on socket 0 was shrunk by 4MB 00:07:57.728 EAL: Trying to obtain current memory policy. 
00:07:57.728 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:57.728 EAL: Restoring previous memory policy: 4 00:07:57.728 EAL: Calling mem event callback 'spdk:(nil)' 00:07:57.728 EAL: request: mp_malloc_sync 00:07:57.728 EAL: No shared files mode enabled, IPC is disabled 00:07:57.728 EAL: Heap on socket 0 was expanded by 6MB 00:07:57.728 EAL: Calling mem event callback 'spdk:(nil)' 00:07:57.728 EAL: request: mp_malloc_sync 00:07:57.728 EAL: No shared files mode enabled, IPC is disabled 00:07:57.728 EAL: Heap on socket 0 was shrunk by 6MB 00:07:57.728 EAL: Trying to obtain current memory policy. 00:07:57.728 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:57.728 EAL: Restoring previous memory policy: 4 00:07:57.728 EAL: Calling mem event callback 'spdk:(nil)' 00:07:57.729 EAL: request: mp_malloc_sync 00:07:57.729 EAL: No shared files mode enabled, IPC is disabled 00:07:57.729 EAL: Heap on socket 0 was expanded by 10MB 00:07:57.729 EAL: Calling mem event callback 'spdk:(nil)' 00:07:57.729 EAL: request: mp_malloc_sync 00:07:57.729 EAL: No shared files mode enabled, IPC is disabled 00:07:57.729 EAL: Heap on socket 0 was shrunk by 10MB 00:07:57.729 EAL: Trying to obtain current memory policy. 00:07:57.729 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:57.729 EAL: Restoring previous memory policy: 4 00:07:57.729 EAL: Calling mem event callback 'spdk:(nil)' 00:07:57.729 EAL: request: mp_malloc_sync 00:07:57.729 EAL: No shared files mode enabled, IPC is disabled 00:07:57.729 EAL: Heap on socket 0 was expanded by 18MB 00:07:57.729 EAL: Calling mem event callback 'spdk:(nil)' 00:07:57.729 EAL: request: mp_malloc_sync 00:07:57.729 EAL: No shared files mode enabled, IPC is disabled 00:07:57.729 EAL: Heap on socket 0 was shrunk by 18MB 00:07:57.729 EAL: Trying to obtain current memory policy. 
00:07:57.729 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:57.729 EAL: Restoring previous memory policy: 4 00:07:57.729 EAL: Calling mem event callback 'spdk:(nil)' 00:07:57.729 EAL: request: mp_malloc_sync 00:07:57.729 EAL: No shared files mode enabled, IPC is disabled 00:07:57.729 EAL: Heap on socket 0 was expanded by 34MB 00:07:57.729 EAL: Calling mem event callback 'spdk:(nil)' 00:07:57.729 EAL: request: mp_malloc_sync 00:07:57.729 EAL: No shared files mode enabled, IPC is disabled 00:07:57.729 EAL: Heap on socket 0 was shrunk by 34MB 00:07:57.729 EAL: Trying to obtain current memory policy. 00:07:57.729 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:57.729 EAL: Restoring previous memory policy: 4 00:07:57.729 EAL: Calling mem event callback 'spdk:(nil)' 00:07:57.729 EAL: request: mp_malloc_sync 00:07:57.729 EAL: No shared files mode enabled, IPC is disabled 00:07:57.729 EAL: Heap on socket 0 was expanded by 66MB 00:07:57.729 EAL: Calling mem event callback 'spdk:(nil)' 00:07:57.729 EAL: request: mp_malloc_sync 00:07:57.729 EAL: No shared files mode enabled, IPC is disabled 00:07:57.729 EAL: Heap on socket 0 was shrunk by 66MB 00:07:57.729 EAL: Trying to obtain current memory policy. 00:07:57.729 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:57.729 EAL: Restoring previous memory policy: 4 00:07:57.729 EAL: Calling mem event callback 'spdk:(nil)' 00:07:57.729 EAL: request: mp_malloc_sync 00:07:57.729 EAL: No shared files mode enabled, IPC is disabled 00:07:57.729 EAL: Heap on socket 0 was expanded by 130MB 00:07:57.729 EAL: Calling mem event callback 'spdk:(nil)' 00:07:57.729 EAL: request: mp_malloc_sync 00:07:57.729 EAL: No shared files mode enabled, IPC is disabled 00:07:57.729 EAL: Heap on socket 0 was shrunk by 130MB 00:07:57.729 EAL: Trying to obtain current memory policy. 
00:07:57.729 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:57.729 EAL: Restoring previous memory policy: 4 00:07:57.729 EAL: Calling mem event callback 'spdk:(nil)' 00:07:57.729 EAL: request: mp_malloc_sync 00:07:57.729 EAL: No shared files mode enabled, IPC is disabled 00:07:57.729 EAL: Heap on socket 0 was expanded by 258MB 00:07:57.729 EAL: Calling mem event callback 'spdk:(nil)' 00:07:57.729 EAL: request: mp_malloc_sync 00:07:57.729 EAL: No shared files mode enabled, IPC is disabled 00:07:57.729 EAL: Heap on socket 0 was shrunk by 258MB 00:07:57.729 EAL: Trying to obtain current memory policy. 00:07:57.729 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:57.988 EAL: Restoring previous memory policy: 4 00:07:57.988 EAL: Calling mem event callback 'spdk:(nil)' 00:07:57.988 EAL: request: mp_malloc_sync 00:07:57.988 EAL: No shared files mode enabled, IPC is disabled 00:07:57.988 EAL: Heap on socket 0 was expanded by 514MB 00:07:57.988 EAL: Calling mem event callback 'spdk:(nil)' 00:07:57.988 EAL: request: mp_malloc_sync 00:07:57.988 EAL: No shared files mode enabled, IPC is disabled 00:07:57.988 EAL: Heap on socket 0 was shrunk by 514MB 00:07:57.988 EAL: Trying to obtain current memory policy. 
00:07:57.988 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:58.247 EAL: Restoring previous memory policy: 4 00:07:58.247 EAL: Calling mem event callback 'spdk:(nil)' 00:07:58.247 EAL: request: mp_malloc_sync 00:07:58.247 EAL: No shared files mode enabled, IPC is disabled 00:07:58.247 EAL: Heap on socket 0 was expanded by 1026MB 00:07:58.506 EAL: Calling mem event callback 'spdk:(nil)' 00:07:58.506 EAL: request: mp_malloc_sync 00:07:58.506 EAL: No shared files mode enabled, IPC is disabled 00:07:58.506 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:58.506 passed 00:07:58.506 00:07:58.506 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.506 suites 1 1 n/a 0 0 00:07:58.506 tests 2 2 2 0 0 00:07:58.506 asserts 497 497 497 0 n/a 00:07:58.506 00:07:58.506 Elapsed time = 0.973 seconds 00:07:58.506 EAL: Calling mem event callback 'spdk:(nil)' 00:07:58.506 EAL: request: mp_malloc_sync 00:07:58.506 EAL: No shared files mode enabled, IPC is disabled 00:07:58.506 EAL: Heap on socket 0 was shrunk by 2MB 00:07:58.506 EAL: No shared files mode enabled, IPC is disabled 00:07:58.506 EAL: No shared files mode enabled, IPC is disabled 00:07:58.506 EAL: No shared files mode enabled, IPC is disabled 00:07:58.506 00:07:58.506 real 0m1.104s 00:07:58.506 user 0m0.655s 00:07:58.506 sys 0m0.424s 00:07:58.506 13:39:41 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.506 13:39:41 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:58.506 ************************************ 00:07:58.506 END TEST env_vtophys 00:07:58.506 ************************************ 00:07:58.765 13:39:41 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:07:58.765 13:39:41 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:58.765 13:39:41 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.765 13:39:41 env -- common/autotest_common.sh@10 -- # set +x 00:07:58.765 
************************************ 00:07:58.765 START TEST env_pci 00:07:58.765 ************************************ 00:07:58.765 13:39:41 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:07:58.765 00:07:58.765 00:07:58.765 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.765 http://cunit.sourceforge.net/ 00:07:58.765 00:07:58.765 00:07:58.765 Suite: pci 00:07:58.765 Test: pci_hook ...[2024-12-05 13:39:41.164151] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 469721 has claimed it 00:07:58.765 EAL: Cannot find device (10000:00:01.0) 00:07:58.765 EAL: Failed to attach device on primary process 00:07:58.765 passed 00:07:58.765 00:07:58.765 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.765 suites 1 1 n/a 0 0 00:07:58.765 tests 1 1 1 0 0 00:07:58.765 asserts 25 25 25 0 n/a 00:07:58.765 00:07:58.765 Elapsed time = 0.026 seconds 00:07:58.765 00:07:58.765 real 0m0.045s 00:07:58.765 user 0m0.016s 00:07:58.765 sys 0m0.029s 00:07:58.765 13:39:41 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.765 13:39:41 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:58.765 ************************************ 00:07:58.765 END TEST env_pci 00:07:58.765 ************************************ 00:07:58.765 13:39:41 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:58.765 13:39:41 env -- env/env.sh@15 -- # uname 00:07:58.765 13:39:41 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:58.765 13:39:41 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:58.765 13:39:41 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:58.765 13:39:41 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:58.765 13:39:41 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.765 13:39:41 env -- common/autotest_common.sh@10 -- # set +x 00:07:58.765 ************************************ 00:07:58.765 START TEST env_dpdk_post_init 00:07:58.765 ************************************ 00:07:58.765 13:39:41 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:58.765 EAL: Detected CPU lcores: 96 00:07:58.765 EAL: Detected NUMA nodes: 2 00:07:58.765 EAL: Detected shared linkage of DPDK 00:07:58.765 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:58.765 EAL: Selected IOVA mode 'VA' 00:07:58.765 EAL: VFIO support initialized 00:07:58.765 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:59.023 EAL: Using IOMMU type 1 (Type 1) 00:07:59.023 EAL: Ignore mapping IO port bar(1) 00:07:59.023 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:07:59.023 EAL: Ignore mapping IO port bar(1) 00:07:59.023 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:07:59.023 EAL: Ignore mapping IO port bar(1) 00:07:59.023 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:07:59.023 EAL: Ignore mapping IO port bar(1) 00:07:59.023 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:07:59.023 EAL: Ignore mapping IO port bar(1) 00:07:59.023 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:07:59.023 EAL: Ignore mapping IO port bar(1) 00:07:59.023 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:07:59.023 EAL: Ignore mapping IO port bar(1) 00:07:59.023 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:07:59.023 EAL: Ignore mapping IO port bar(1) 00:07:59.023 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:07:59.960 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:07:59.960 EAL: Ignore mapping IO port bar(1) 00:07:59.960 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:07:59.960 EAL: Ignore mapping IO port bar(1) 00:07:59.960 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:07:59.960 EAL: Ignore mapping IO port bar(1) 00:07:59.960 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:07:59.960 EAL: Ignore mapping IO port bar(1) 00:07:59.960 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:07:59.960 EAL: Ignore mapping IO port bar(1) 00:07:59.960 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:07:59.960 EAL: Ignore mapping IO port bar(1) 00:07:59.960 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:07:59.960 EAL: Ignore mapping IO port bar(1) 00:07:59.960 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:07:59.960 EAL: Ignore mapping IO port bar(1) 00:07:59.960 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:08:03.265 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:08:03.265 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:08:03.832 Starting DPDK initialization... 00:08:03.832 Starting SPDK post initialization... 00:08:03.832 SPDK NVMe probe 00:08:03.832 Attaching to 0000:5e:00.0 00:08:03.832 Attached to 0000:5e:00.0 00:08:03.832 Cleaning up... 
00:08:03.832 00:08:03.832 real 0m4.943s 00:08:03.832 user 0m3.508s 00:08:03.832 sys 0m0.506s 00:08:03.832 13:39:46 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.832 13:39:46 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:03.832 ************************************ 00:08:03.832 END TEST env_dpdk_post_init 00:08:03.832 ************************************ 00:08:03.832 13:39:46 env -- env/env.sh@26 -- # uname 00:08:03.832 13:39:46 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:03.832 13:39:46 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:03.832 13:39:46 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:03.832 13:39:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.832 13:39:46 env -- common/autotest_common.sh@10 -- # set +x 00:08:03.832 ************************************ 00:08:03.832 START TEST env_mem_callbacks 00:08:03.832 ************************************ 00:08:03.832 13:39:46 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:03.832 EAL: Detected CPU lcores: 96 00:08:03.832 EAL: Detected NUMA nodes: 2 00:08:03.832 EAL: Detected shared linkage of DPDK 00:08:03.832 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:03.832 EAL: Selected IOVA mode 'VA' 00:08:03.832 EAL: VFIO support initialized 00:08:03.832 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:03.832 00:08:03.832 00:08:03.832 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.832 http://cunit.sourceforge.net/ 00:08:03.832 00:08:03.832 00:08:03.832 Suite: memory 00:08:03.832 Test: test ... 
00:08:03.832 register 0x200000200000 2097152 00:08:03.832 malloc 3145728 00:08:03.832 register 0x200000400000 4194304 00:08:03.832 buf 0x200000500000 len 3145728 PASSED 00:08:03.832 malloc 64 00:08:03.832 buf 0x2000004fff40 len 64 PASSED 00:08:03.832 malloc 4194304 00:08:03.832 register 0x200000800000 6291456 00:08:03.832 buf 0x200000a00000 len 4194304 PASSED 00:08:03.832 free 0x200000500000 3145728 00:08:03.832 free 0x2000004fff40 64 00:08:03.832 unregister 0x200000400000 4194304 PASSED 00:08:03.832 free 0x200000a00000 4194304 00:08:03.832 unregister 0x200000800000 6291456 PASSED 00:08:03.832 malloc 8388608 00:08:03.832 register 0x200000400000 10485760 00:08:03.832 buf 0x200000600000 len 8388608 PASSED 00:08:03.832 free 0x200000600000 8388608 00:08:03.832 unregister 0x200000400000 10485760 PASSED 00:08:03.832 passed 00:08:03.832 00:08:03.832 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.832 suites 1 1 n/a 0 0 00:08:03.832 tests 1 1 1 0 0 00:08:03.832 asserts 15 15 15 0 n/a 00:08:03.832 00:08:03.832 Elapsed time = 0.008 seconds 00:08:03.832 00:08:03.832 real 0m0.057s 00:08:03.832 user 0m0.026s 00:08:03.832 sys 0m0.030s 00:08:03.832 13:39:46 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.832 13:39:46 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:03.832 ************************************ 00:08:03.832 END TEST env_mem_callbacks 00:08:03.832 ************************************ 00:08:03.832 00:08:03.832 real 0m6.819s 00:08:03.832 user 0m4.572s 00:08:03.832 sys 0m1.327s 00:08:03.832 13:39:46 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.832 13:39:46 env -- common/autotest_common.sh@10 -- # set +x 00:08:03.832 ************************************ 00:08:03.832 END TEST env 00:08:03.832 ************************************ 00:08:03.832 13:39:46 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:08:03.832 13:39:46 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:03.832 13:39:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.832 13:39:46 -- common/autotest_common.sh@10 -- # set +x 00:08:04.090 ************************************ 00:08:04.090 START TEST rpc 00:08:04.090 ************************************ 00:08:04.090 13:39:46 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:08:04.090 * Looking for test storage... 00:08:04.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:04.090 13:39:46 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:04.090 13:39:46 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:04.090 13:39:46 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:04.090 13:39:46 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:04.090 13:39:46 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:04.090 13:39:46 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:04.090 13:39:46 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:04.090 13:39:46 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:04.090 13:39:46 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:04.090 13:39:46 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:04.090 13:39:46 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:04.090 13:39:46 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:04.090 13:39:46 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:04.090 13:39:46 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:04.090 13:39:46 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:04.090 13:39:46 rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:04.090 13:39:46 rpc -- scripts/common.sh@345 -- # : 1 00:08:04.090 13:39:46 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:04.090 13:39:46 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:04.090 13:39:46 rpc -- scripts/common.sh@365 -- # decimal 1 00:08:04.090 13:39:46 rpc -- scripts/common.sh@353 -- # local d=1 00:08:04.090 13:39:46 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:04.090 13:39:46 rpc -- scripts/common.sh@355 -- # echo 1 00:08:04.090 13:39:46 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:04.090 13:39:46 rpc -- scripts/common.sh@366 -- # decimal 2 00:08:04.090 13:39:46 rpc -- scripts/common.sh@353 -- # local d=2 00:08:04.090 13:39:46 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:04.090 13:39:46 rpc -- scripts/common.sh@355 -- # echo 2 00:08:04.090 13:39:46 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:04.090 13:39:46 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:04.090 13:39:46 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:04.091 13:39:46 rpc -- scripts/common.sh@368 -- # return 0 00:08:04.091 13:39:46 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:04.091 13:39:46 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:04.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.091 --rc genhtml_branch_coverage=1 00:08:04.091 --rc genhtml_function_coverage=1 00:08:04.091 --rc genhtml_legend=1 00:08:04.091 --rc geninfo_all_blocks=1 00:08:04.091 --rc geninfo_unexecuted_blocks=1 00:08:04.091 00:08:04.091 ' 00:08:04.091 13:39:46 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:04.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.091 --rc genhtml_branch_coverage=1 00:08:04.091 --rc genhtml_function_coverage=1 00:08:04.091 --rc genhtml_legend=1 00:08:04.091 --rc geninfo_all_blocks=1 00:08:04.091 --rc geninfo_unexecuted_blocks=1 00:08:04.091 00:08:04.091 ' 00:08:04.091 13:39:46 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:04.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:04.091 --rc genhtml_branch_coverage=1 00:08:04.091 --rc genhtml_function_coverage=1 00:08:04.091 --rc genhtml_legend=1 00:08:04.091 --rc geninfo_all_blocks=1 00:08:04.091 --rc geninfo_unexecuted_blocks=1 00:08:04.091 00:08:04.091 ' 00:08:04.091 13:39:46 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:04.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.091 --rc genhtml_branch_coverage=1 00:08:04.091 --rc genhtml_function_coverage=1 00:08:04.091 --rc genhtml_legend=1 00:08:04.091 --rc geninfo_all_blocks=1 00:08:04.091 --rc geninfo_unexecuted_blocks=1 00:08:04.091 00:08:04.091 ' 00:08:04.091 13:39:46 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:08:04.091 13:39:46 rpc -- rpc/rpc.sh@65 -- # spdk_pid=470778 00:08:04.091 13:39:46 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:04.091 13:39:46 rpc -- rpc/rpc.sh@67 -- # waitforlisten 470778 00:08:04.091 13:39:46 rpc -- common/autotest_common.sh@835 -- # '[' -z 470778 ']' 00:08:04.091 13:39:46 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.091 13:39:46 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.091 13:39:46 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.091 13:39:46 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.091 13:39:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:04.091 [2024-12-05 13:39:46.660929] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:08:04.091 [2024-12-05 13:39:46.660976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid470778 ] 00:08:04.350 [2024-12-05 13:39:46.733780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.350 [2024-12-05 13:39:46.772965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:04.350 [2024-12-05 13:39:46.773003] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 470778' to capture a snapshot of events at runtime. 00:08:04.350 [2024-12-05 13:39:46.773010] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:04.350 [2024-12-05 13:39:46.773017] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:04.350 [2024-12-05 13:39:46.773022] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid470778 for offline analysis/debug. 
00:08:04.350 [2024-12-05 13:39:46.773600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.609 13:39:46 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.609 13:39:46 rpc -- common/autotest_common.sh@868 -- # return 0 00:08:04.609 13:39:46 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:04.609 13:39:46 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:04.609 13:39:46 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:04.609 13:39:46 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:04.609 13:39:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:04.609 13:39:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.609 13:39:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:04.609 ************************************ 00:08:04.609 START TEST rpc_integrity 00:08:04.609 ************************************ 00:08:04.609 13:39:47 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:04.609 13:39:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:04.609 13:39:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.609 13:39:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:04.609 13:39:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.609 13:39:47 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:08:04.609 13:39:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:04.609 13:39:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:04.609 13:39:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:04.609 13:39:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.609 13:39:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:04.609 13:39:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.609 13:39:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:04.609 13:39:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:04.609 13:39:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.609 13:39:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:04.609 13:39:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.609 13:39:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:04.609 { 00:08:04.609 "name": "Malloc0", 00:08:04.609 "aliases": [ 00:08:04.609 "d58b391b-32f6-4e81-ba1d-1eccccde7d20" 00:08:04.609 ], 00:08:04.609 "product_name": "Malloc disk", 00:08:04.609 "block_size": 512, 00:08:04.609 "num_blocks": 16384, 00:08:04.609 "uuid": "d58b391b-32f6-4e81-ba1d-1eccccde7d20", 00:08:04.609 "assigned_rate_limits": { 00:08:04.609 "rw_ios_per_sec": 0, 00:08:04.609 "rw_mbytes_per_sec": 0, 00:08:04.609 "r_mbytes_per_sec": 0, 00:08:04.609 "w_mbytes_per_sec": 0 00:08:04.609 }, 00:08:04.609 "claimed": false, 00:08:04.609 "zoned": false, 00:08:04.609 "supported_io_types": { 00:08:04.609 "read": true, 00:08:04.609 "write": true, 00:08:04.609 "unmap": true, 00:08:04.609 "flush": true, 00:08:04.609 "reset": true, 00:08:04.609 "nvme_admin": false, 00:08:04.609 "nvme_io": false, 00:08:04.609 "nvme_io_md": false, 00:08:04.609 "write_zeroes": true, 00:08:04.609 "zcopy": true, 00:08:04.609 "get_zone_info": false, 00:08:04.609 
"zone_management": false, 00:08:04.609 "zone_append": false, 00:08:04.609 "compare": false, 00:08:04.609 "compare_and_write": false, 00:08:04.609 "abort": true, 00:08:04.609 "seek_hole": false, 00:08:04.609 "seek_data": false, 00:08:04.609 "copy": true, 00:08:04.609 "nvme_iov_md": false 00:08:04.609 }, 00:08:04.609 "memory_domains": [ 00:08:04.609 { 00:08:04.609 "dma_device_id": "system", 00:08:04.609 "dma_device_type": 1 00:08:04.609 }, 00:08:04.609 { 00:08:04.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.609 "dma_device_type": 2 00:08:04.609 } 00:08:04.609 ], 00:08:04.609 "driver_specific": {} 00:08:04.609 } 00:08:04.609 ]' 00:08:04.609 13:39:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:04.609 13:39:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:04.609 13:39:47 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:04.609 13:39:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.609 13:39:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:04.609 [2024-12-05 13:39:47.155455] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:04.609 [2024-12-05 13:39:47.155484] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.609 [2024-12-05 13:39:47.155496] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13bfc00 00:08:04.609 [2024-12-05 13:39:47.155502] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.609 [2024-12-05 13:39:47.156589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.609 [2024-12-05 13:39:47.156611] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:04.609 Passthru0 00:08:04.609 13:39:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.609 13:39:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:08:04.609 13:39:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.609 13:39:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:04.609 13:39:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.609 13:39:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:04.609 { 00:08:04.610 "name": "Malloc0", 00:08:04.610 "aliases": [ 00:08:04.610 "d58b391b-32f6-4e81-ba1d-1eccccde7d20" 00:08:04.610 ], 00:08:04.610 "product_name": "Malloc disk", 00:08:04.610 "block_size": 512, 00:08:04.610 "num_blocks": 16384, 00:08:04.610 "uuid": "d58b391b-32f6-4e81-ba1d-1eccccde7d20", 00:08:04.610 "assigned_rate_limits": { 00:08:04.610 "rw_ios_per_sec": 0, 00:08:04.610 "rw_mbytes_per_sec": 0, 00:08:04.610 "r_mbytes_per_sec": 0, 00:08:04.610 "w_mbytes_per_sec": 0 00:08:04.610 }, 00:08:04.610 "claimed": true, 00:08:04.610 "claim_type": "exclusive_write", 00:08:04.610 "zoned": false, 00:08:04.610 "supported_io_types": { 00:08:04.610 "read": true, 00:08:04.610 "write": true, 00:08:04.610 "unmap": true, 00:08:04.610 "flush": true, 00:08:04.610 "reset": true, 00:08:04.610 "nvme_admin": false, 00:08:04.610 "nvme_io": false, 00:08:04.610 "nvme_io_md": false, 00:08:04.610 "write_zeroes": true, 00:08:04.610 "zcopy": true, 00:08:04.610 "get_zone_info": false, 00:08:04.610 "zone_management": false, 00:08:04.610 "zone_append": false, 00:08:04.610 "compare": false, 00:08:04.610 "compare_and_write": false, 00:08:04.610 "abort": true, 00:08:04.610 "seek_hole": false, 00:08:04.610 "seek_data": false, 00:08:04.610 "copy": true, 00:08:04.610 "nvme_iov_md": false 00:08:04.610 }, 00:08:04.610 "memory_domains": [ 00:08:04.610 { 00:08:04.610 "dma_device_id": "system", 00:08:04.610 "dma_device_type": 1 00:08:04.610 }, 00:08:04.610 { 00:08:04.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.610 "dma_device_type": 2 00:08:04.610 } 00:08:04.610 ], 00:08:04.610 "driver_specific": {} 00:08:04.610 }, 00:08:04.610 { 
00:08:04.610 "name": "Passthru0", 00:08:04.610 "aliases": [ 00:08:04.610 "24be13e4-dcd1-571f-9ba0-db32e7ca3950" 00:08:04.610 ], 00:08:04.610 "product_name": "passthru", 00:08:04.610 "block_size": 512, 00:08:04.610 "num_blocks": 16384, 00:08:04.610 "uuid": "24be13e4-dcd1-571f-9ba0-db32e7ca3950", 00:08:04.610 "assigned_rate_limits": { 00:08:04.610 "rw_ios_per_sec": 0, 00:08:04.610 "rw_mbytes_per_sec": 0, 00:08:04.610 "r_mbytes_per_sec": 0, 00:08:04.610 "w_mbytes_per_sec": 0 00:08:04.610 }, 00:08:04.610 "claimed": false, 00:08:04.610 "zoned": false, 00:08:04.610 "supported_io_types": { 00:08:04.610 "read": true, 00:08:04.610 "write": true, 00:08:04.610 "unmap": true, 00:08:04.610 "flush": true, 00:08:04.610 "reset": true, 00:08:04.610 "nvme_admin": false, 00:08:04.610 "nvme_io": false, 00:08:04.610 "nvme_io_md": false, 00:08:04.610 "write_zeroes": true, 00:08:04.610 "zcopy": true, 00:08:04.610 "get_zone_info": false, 00:08:04.610 "zone_management": false, 00:08:04.610 "zone_append": false, 00:08:04.610 "compare": false, 00:08:04.610 "compare_and_write": false, 00:08:04.610 "abort": true, 00:08:04.610 "seek_hole": false, 00:08:04.610 "seek_data": false, 00:08:04.610 "copy": true, 00:08:04.610 "nvme_iov_md": false 00:08:04.610 }, 00:08:04.610 "memory_domains": [ 00:08:04.610 { 00:08:04.610 "dma_device_id": "system", 00:08:04.610 "dma_device_type": 1 00:08:04.610 }, 00:08:04.610 { 00:08:04.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.610 "dma_device_type": 2 00:08:04.610 } 00:08:04.610 ], 00:08:04.610 "driver_specific": { 00:08:04.610 "passthru": { 00:08:04.610 "name": "Passthru0", 00:08:04.610 "base_bdev_name": "Malloc0" 00:08:04.610 } 00:08:04.610 } 00:08:04.610 } 00:08:04.610 ]' 00:08:04.610 13:39:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:04.867 13:39:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:04.867 13:39:47 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:04.867 13:39:47 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.867 13:39:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:04.867 13:39:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.867 13:39:47 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:04.867 13:39:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.867 13:39:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:04.867 13:39:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.867 13:39:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:04.867 13:39:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.867 13:39:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:04.867 13:39:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.867 13:39:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:04.867 13:39:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:04.867 13:39:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:04.867 00:08:04.867 real 0m0.271s 00:08:04.867 user 0m0.163s 00:08:04.867 sys 0m0.041s 00:08:04.867 13:39:47 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.867 13:39:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:04.867 ************************************ 00:08:04.867 END TEST rpc_integrity 00:08:04.867 ************************************ 00:08:04.867 13:39:47 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:04.867 13:39:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:04.868 13:39:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.868 13:39:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:04.868 ************************************ 00:08:04.868 START TEST rpc_plugins 
00:08:04.868 ************************************ 00:08:04.868 13:39:47 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:08:04.868 13:39:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:04.868 13:39:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.868 13:39:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:04.868 13:39:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.868 13:39:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:04.868 13:39:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:04.868 13:39:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.868 13:39:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:04.868 13:39:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.868 13:39:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:04.868 { 00:08:04.868 "name": "Malloc1", 00:08:04.868 "aliases": [ 00:08:04.868 "89dc3707-95b1-4f8f-826e-9a0fe408db34" 00:08:04.868 ], 00:08:04.868 "product_name": "Malloc disk", 00:08:04.868 "block_size": 4096, 00:08:04.868 "num_blocks": 256, 00:08:04.868 "uuid": "89dc3707-95b1-4f8f-826e-9a0fe408db34", 00:08:04.868 "assigned_rate_limits": { 00:08:04.868 "rw_ios_per_sec": 0, 00:08:04.868 "rw_mbytes_per_sec": 0, 00:08:04.868 "r_mbytes_per_sec": 0, 00:08:04.868 "w_mbytes_per_sec": 0 00:08:04.868 }, 00:08:04.868 "claimed": false, 00:08:04.868 "zoned": false, 00:08:04.868 "supported_io_types": { 00:08:04.868 "read": true, 00:08:04.868 "write": true, 00:08:04.868 "unmap": true, 00:08:04.868 "flush": true, 00:08:04.868 "reset": true, 00:08:04.868 "nvme_admin": false, 00:08:04.868 "nvme_io": false, 00:08:04.868 "nvme_io_md": false, 00:08:04.868 "write_zeroes": true, 00:08:04.868 "zcopy": true, 00:08:04.868 "get_zone_info": false, 00:08:04.868 "zone_management": false, 00:08:04.868 
"zone_append": false, 00:08:04.868 "compare": false, 00:08:04.868 "compare_and_write": false, 00:08:04.868 "abort": true, 00:08:04.868 "seek_hole": false, 00:08:04.868 "seek_data": false, 00:08:04.868 "copy": true, 00:08:04.868 "nvme_iov_md": false 00:08:04.868 }, 00:08:04.868 "memory_domains": [ 00:08:04.868 { 00:08:04.868 "dma_device_id": "system", 00:08:04.868 "dma_device_type": 1 00:08:04.868 }, 00:08:04.868 { 00:08:04.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.868 "dma_device_type": 2 00:08:04.868 } 00:08:04.868 ], 00:08:04.868 "driver_specific": {} 00:08:04.868 } 00:08:04.868 ]' 00:08:04.868 13:39:47 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:04.868 13:39:47 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:04.868 13:39:47 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:04.868 13:39:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.868 13:39:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:05.125 13:39:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.125 13:39:47 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:05.125 13:39:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.125 13:39:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:05.125 13:39:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.125 13:39:47 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:05.125 13:39:47 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:05.125 13:39:47 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:05.125 00:08:05.125 real 0m0.144s 00:08:05.125 user 0m0.083s 00:08:05.125 sys 0m0.023s 00:08:05.125 13:39:47 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.125 13:39:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:05.125 ************************************ 
00:08:05.125 END TEST rpc_plugins 00:08:05.125 ************************************ 00:08:05.125 13:39:47 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:05.125 13:39:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:05.125 13:39:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.125 13:39:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.125 ************************************ 00:08:05.125 START TEST rpc_trace_cmd_test 00:08:05.125 ************************************ 00:08:05.125 13:39:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:08:05.125 13:39:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:05.125 13:39:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:05.125 13:39:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.125 13:39:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.125 13:39:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.125 13:39:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:05.125 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid470778", 00:08:05.125 "tpoint_group_mask": "0x8", 00:08:05.125 "iscsi_conn": { 00:08:05.125 "mask": "0x2", 00:08:05.125 "tpoint_mask": "0x0" 00:08:05.125 }, 00:08:05.125 "scsi": { 00:08:05.125 "mask": "0x4", 00:08:05.125 "tpoint_mask": "0x0" 00:08:05.125 }, 00:08:05.125 "bdev": { 00:08:05.125 "mask": "0x8", 00:08:05.125 "tpoint_mask": "0xffffffffffffffff" 00:08:05.125 }, 00:08:05.125 "nvmf_rdma": { 00:08:05.125 "mask": "0x10", 00:08:05.125 "tpoint_mask": "0x0" 00:08:05.125 }, 00:08:05.125 "nvmf_tcp": { 00:08:05.125 "mask": "0x20", 00:08:05.125 "tpoint_mask": "0x0" 00:08:05.125 }, 00:08:05.125 "ftl": { 00:08:05.125 "mask": "0x40", 00:08:05.125 "tpoint_mask": "0x0" 00:08:05.125 }, 00:08:05.125 "blobfs": { 00:08:05.125 "mask": "0x80", 00:08:05.125 
"tpoint_mask": "0x0" 00:08:05.125 }, 00:08:05.125 "dsa": { 00:08:05.125 "mask": "0x200", 00:08:05.125 "tpoint_mask": "0x0" 00:08:05.125 }, 00:08:05.125 "thread": { 00:08:05.125 "mask": "0x400", 00:08:05.125 "tpoint_mask": "0x0" 00:08:05.125 }, 00:08:05.125 "nvme_pcie": { 00:08:05.125 "mask": "0x800", 00:08:05.125 "tpoint_mask": "0x0" 00:08:05.125 }, 00:08:05.125 "iaa": { 00:08:05.125 "mask": "0x1000", 00:08:05.125 "tpoint_mask": "0x0" 00:08:05.125 }, 00:08:05.125 "nvme_tcp": { 00:08:05.125 "mask": "0x2000", 00:08:05.125 "tpoint_mask": "0x0" 00:08:05.125 }, 00:08:05.125 "bdev_nvme": { 00:08:05.125 "mask": "0x4000", 00:08:05.125 "tpoint_mask": "0x0" 00:08:05.125 }, 00:08:05.125 "sock": { 00:08:05.125 "mask": "0x8000", 00:08:05.125 "tpoint_mask": "0x0" 00:08:05.125 }, 00:08:05.125 "blob": { 00:08:05.125 "mask": "0x10000", 00:08:05.125 "tpoint_mask": "0x0" 00:08:05.125 }, 00:08:05.125 "bdev_raid": { 00:08:05.125 "mask": "0x20000", 00:08:05.125 "tpoint_mask": "0x0" 00:08:05.125 }, 00:08:05.125 "scheduler": { 00:08:05.125 "mask": "0x40000", 00:08:05.125 "tpoint_mask": "0x0" 00:08:05.125 } 00:08:05.125 }' 00:08:05.125 13:39:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:05.125 13:39:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:08:05.125 13:39:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:05.125 13:39:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:05.125 13:39:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:05.384 13:39:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:05.384 13:39:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:05.384 13:39:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:05.384 13:39:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:05.384 13:39:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:08:05.384 00:08:05.384 real 0m0.223s 00:08:05.384 user 0m0.187s 00:08:05.384 sys 0m0.026s 00:08:05.384 13:39:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.384 13:39:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.384 ************************************ 00:08:05.384 END TEST rpc_trace_cmd_test 00:08:05.384 ************************************ 00:08:05.384 13:39:47 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:05.384 13:39:47 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:05.384 13:39:47 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:05.384 13:39:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:05.384 13:39:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.384 13:39:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.384 ************************************ 00:08:05.384 START TEST rpc_daemon_integrity 00:08:05.384 ************************************ 00:08:05.384 13:39:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:05.384 13:39:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:05.384 13:39:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.384 13:39:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:05.384 13:39:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.384 13:39:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:05.384 13:39:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:05.384 13:39:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:05.384 13:39:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:05.384 13:39:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.384 13:39:47 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:08:05.384 13:39:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.384 13:39:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:05.384 13:39:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:05.384 13:39:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.384 13:39:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:05.384 13:39:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.384 13:39:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:05.384 { 00:08:05.384 "name": "Malloc2", 00:08:05.384 "aliases": [ 00:08:05.384 "9f311f62-d3b1-4dec-8b31-2a4e396e3524" 00:08:05.384 ], 00:08:05.384 "product_name": "Malloc disk", 00:08:05.384 "block_size": 512, 00:08:05.384 "num_blocks": 16384, 00:08:05.384 "uuid": "9f311f62-d3b1-4dec-8b31-2a4e396e3524", 00:08:05.384 "assigned_rate_limits": { 00:08:05.384 "rw_ios_per_sec": 0, 00:08:05.384 "rw_mbytes_per_sec": 0, 00:08:05.384 "r_mbytes_per_sec": 0, 00:08:05.384 "w_mbytes_per_sec": 0 00:08:05.384 }, 00:08:05.384 "claimed": false, 00:08:05.384 "zoned": false, 00:08:05.384 "supported_io_types": { 00:08:05.384 "read": true, 00:08:05.384 "write": true, 00:08:05.384 "unmap": true, 00:08:05.384 "flush": true, 00:08:05.384 "reset": true, 00:08:05.384 "nvme_admin": false, 00:08:05.384 "nvme_io": false, 00:08:05.384 "nvme_io_md": false, 00:08:05.384 "write_zeroes": true, 00:08:05.384 "zcopy": true, 00:08:05.384 "get_zone_info": false, 00:08:05.384 "zone_management": false, 00:08:05.384 "zone_append": false, 00:08:05.384 "compare": false, 00:08:05.384 "compare_and_write": false, 00:08:05.384 "abort": true, 00:08:05.384 "seek_hole": false, 00:08:05.384 "seek_data": false, 00:08:05.384 "copy": true, 00:08:05.384 "nvme_iov_md": false 00:08:05.384 }, 00:08:05.384 "memory_domains": [ 00:08:05.384 { 
00:08:05.384 "dma_device_id": "system", 00:08:05.384 "dma_device_type": 1 00:08:05.384 }, 00:08:05.384 { 00:08:05.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.384 "dma_device_type": 2 00:08:05.384 } 00:08:05.384 ], 00:08:05.384 "driver_specific": {} 00:08:05.384 } 00:08:05.384 ]' 00:08:05.384 13:39:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:05.643 13:39:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:05.643 13:39:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:05.643 13:39:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.643 13:39:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:05.643 [2024-12-05 13:39:48.009763] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:05.643 [2024-12-05 13:39:48.009792] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.643 [2024-12-05 13:39:48.009804] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x138d4e0 00:08:05.643 [2024-12-05 13:39:48.009810] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.643 [2024-12-05 13:39:48.010782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.643 [2024-12-05 13:39:48.010802] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:05.643 Passthru0 00:08:05.643 13:39:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.643 13:39:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:05.643 13:39:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.643 13:39:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:05.643 13:39:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:05.643 13:39:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:05.643 { 00:08:05.643 "name": "Malloc2", 00:08:05.643 "aliases": [ 00:08:05.643 "9f311f62-d3b1-4dec-8b31-2a4e396e3524" 00:08:05.643 ], 00:08:05.643 "product_name": "Malloc disk", 00:08:05.643 "block_size": 512, 00:08:05.643 "num_blocks": 16384, 00:08:05.643 "uuid": "9f311f62-d3b1-4dec-8b31-2a4e396e3524", 00:08:05.643 "assigned_rate_limits": { 00:08:05.643 "rw_ios_per_sec": 0, 00:08:05.643 "rw_mbytes_per_sec": 0, 00:08:05.643 "r_mbytes_per_sec": 0, 00:08:05.643 "w_mbytes_per_sec": 0 00:08:05.643 }, 00:08:05.643 "claimed": true, 00:08:05.643 "claim_type": "exclusive_write", 00:08:05.643 "zoned": false, 00:08:05.643 "supported_io_types": { 00:08:05.643 "read": true, 00:08:05.643 "write": true, 00:08:05.643 "unmap": true, 00:08:05.643 "flush": true, 00:08:05.643 "reset": true, 00:08:05.643 "nvme_admin": false, 00:08:05.643 "nvme_io": false, 00:08:05.643 "nvme_io_md": false, 00:08:05.643 "write_zeroes": true, 00:08:05.643 "zcopy": true, 00:08:05.643 "get_zone_info": false, 00:08:05.643 "zone_management": false, 00:08:05.643 "zone_append": false, 00:08:05.643 "compare": false, 00:08:05.643 "compare_and_write": false, 00:08:05.643 "abort": true, 00:08:05.643 "seek_hole": false, 00:08:05.643 "seek_data": false, 00:08:05.643 "copy": true, 00:08:05.643 "nvme_iov_md": false 00:08:05.643 }, 00:08:05.643 "memory_domains": [ 00:08:05.643 { 00:08:05.643 "dma_device_id": "system", 00:08:05.643 "dma_device_type": 1 00:08:05.643 }, 00:08:05.643 { 00:08:05.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.643 "dma_device_type": 2 00:08:05.643 } 00:08:05.643 ], 00:08:05.643 "driver_specific": {} 00:08:05.643 }, 00:08:05.643 { 00:08:05.643 "name": "Passthru0", 00:08:05.643 "aliases": [ 00:08:05.643 "ff577b7c-6ead-5a80-8332-d3363096e581" 00:08:05.643 ], 00:08:05.643 "product_name": "passthru", 00:08:05.643 "block_size": 512, 00:08:05.643 "num_blocks": 16384, 00:08:05.643 "uuid": 
"ff577b7c-6ead-5a80-8332-d3363096e581", 00:08:05.643 "assigned_rate_limits": { 00:08:05.643 "rw_ios_per_sec": 0, 00:08:05.643 "rw_mbytes_per_sec": 0, 00:08:05.643 "r_mbytes_per_sec": 0, 00:08:05.643 "w_mbytes_per_sec": 0 00:08:05.643 }, 00:08:05.643 "claimed": false, 00:08:05.643 "zoned": false, 00:08:05.643 "supported_io_types": { 00:08:05.643 "read": true, 00:08:05.643 "write": true, 00:08:05.643 "unmap": true, 00:08:05.643 "flush": true, 00:08:05.643 "reset": true, 00:08:05.643 "nvme_admin": false, 00:08:05.643 "nvme_io": false, 00:08:05.643 "nvme_io_md": false, 00:08:05.643 "write_zeroes": true, 00:08:05.643 "zcopy": true, 00:08:05.643 "get_zone_info": false, 00:08:05.643 "zone_management": false, 00:08:05.643 "zone_append": false, 00:08:05.643 "compare": false, 00:08:05.643 "compare_and_write": false, 00:08:05.643 "abort": true, 00:08:05.643 "seek_hole": false, 00:08:05.643 "seek_data": false, 00:08:05.643 "copy": true, 00:08:05.643 "nvme_iov_md": false 00:08:05.643 }, 00:08:05.643 "memory_domains": [ 00:08:05.643 { 00:08:05.643 "dma_device_id": "system", 00:08:05.643 "dma_device_type": 1 00:08:05.643 }, 00:08:05.643 { 00:08:05.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.643 "dma_device_type": 2 00:08:05.643 } 00:08:05.643 ], 00:08:05.643 "driver_specific": { 00:08:05.643 "passthru": { 00:08:05.643 "name": "Passthru0", 00:08:05.643 "base_bdev_name": "Malloc2" 00:08:05.643 } 00:08:05.643 } 00:08:05.643 } 00:08:05.643 ]' 00:08:05.643 13:39:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:05.643 13:39:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:05.643 13:39:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:05.643 13:39:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.643 13:39:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:05.643 13:39:48 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.643 13:39:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:05.643 13:39:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.643 13:39:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:05.643 13:39:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.643 13:39:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:05.643 13:39:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.643 13:39:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:05.643 13:39:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.643 13:39:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:05.643 13:39:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:05.643 13:39:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:05.643 00:08:05.643 real 0m0.272s 00:08:05.643 user 0m0.165s 00:08:05.643 sys 0m0.041s 00:08:05.643 13:39:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.643 13:39:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:05.643 ************************************ 00:08:05.643 END TEST rpc_daemon_integrity 00:08:05.643 ************************************ 00:08:05.643 13:39:48 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:05.643 13:39:48 rpc -- rpc/rpc.sh@84 -- # killprocess 470778 00:08:05.643 13:39:48 rpc -- common/autotest_common.sh@954 -- # '[' -z 470778 ']' 00:08:05.643 13:39:48 rpc -- common/autotest_common.sh@958 -- # kill -0 470778 00:08:05.643 13:39:48 rpc -- common/autotest_common.sh@959 -- # uname 00:08:05.643 13:39:48 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:05.643 13:39:48 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 470778 00:08:05.902 13:39:48 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:05.902 13:39:48 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:05.902 13:39:48 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 470778' 00:08:05.902 killing process with pid 470778 00:08:05.902 13:39:48 rpc -- common/autotest_common.sh@973 -- # kill 470778 00:08:05.902 13:39:48 rpc -- common/autotest_common.sh@978 -- # wait 470778 00:08:06.160 00:08:06.160 real 0m2.087s 00:08:06.160 user 0m2.669s 00:08:06.160 sys 0m0.686s 00:08:06.160 13:39:48 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.160 13:39:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.160 ************************************ 00:08:06.160 END TEST rpc 00:08:06.160 ************************************ 00:08:06.160 13:39:48 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:06.160 13:39:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:06.160 13:39:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.160 13:39:48 -- common/autotest_common.sh@10 -- # set +x 00:08:06.160 ************************************ 00:08:06.160 START TEST skip_rpc 00:08:06.160 ************************************ 00:08:06.160 13:39:48 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:06.160 * Looking for test storage... 
00:08:06.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:06.160 13:39:48 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:06.160 13:39:48 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:06.160 13:39:48 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:06.418 13:39:48 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:06.418 13:39:48 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:06.418 13:39:48 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:06.418 13:39:48 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:06.418 13:39:48 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:06.418 13:39:48 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:06.418 13:39:48 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:06.418 13:39:48 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:06.418 13:39:48 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:06.418 13:39:48 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:06.418 13:39:48 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:06.418 13:39:48 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:06.418 13:39:48 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:06.418 13:39:48 skip_rpc -- scripts/common.sh@345 -- # : 1 00:08:06.418 13:39:48 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:06.418 13:39:48 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:06.418 13:39:48 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:06.418 13:39:48 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:08:06.418 13:39:48 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:06.418 13:39:48 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:08:06.418 13:39:48 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:06.418 13:39:48 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:06.418 13:39:48 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:08:06.418 13:39:48 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:06.418 13:39:48 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:08:06.418 13:39:48 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:06.418 13:39:48 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:06.418 13:39:48 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:06.418 13:39:48 skip_rpc -- scripts/common.sh@368 -- # return 0 00:08:06.418 13:39:48 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:06.418 13:39:48 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:06.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.418 --rc genhtml_branch_coverage=1 00:08:06.418 --rc genhtml_function_coverage=1 00:08:06.418 --rc genhtml_legend=1 00:08:06.418 --rc geninfo_all_blocks=1 00:08:06.418 --rc geninfo_unexecuted_blocks=1 00:08:06.418 00:08:06.418 ' 00:08:06.418 13:39:48 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:06.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.418 --rc genhtml_branch_coverage=1 00:08:06.418 --rc genhtml_function_coverage=1 00:08:06.418 --rc genhtml_legend=1 00:08:06.418 --rc geninfo_all_blocks=1 00:08:06.418 --rc geninfo_unexecuted_blocks=1 00:08:06.418 00:08:06.418 ' 00:08:06.418 13:39:48 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:08:06.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.418 --rc genhtml_branch_coverage=1 00:08:06.418 --rc genhtml_function_coverage=1 00:08:06.418 --rc genhtml_legend=1 00:08:06.418 --rc geninfo_all_blocks=1 00:08:06.418 --rc geninfo_unexecuted_blocks=1 00:08:06.418 00:08:06.418 ' 00:08:06.418 13:39:48 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:06.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.418 --rc genhtml_branch_coverage=1 00:08:06.418 --rc genhtml_function_coverage=1 00:08:06.418 --rc genhtml_legend=1 00:08:06.418 --rc geninfo_all_blocks=1 00:08:06.418 --rc geninfo_unexecuted_blocks=1 00:08:06.418 00:08:06.418 ' 00:08:06.418 13:39:48 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:06.418 13:39:48 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:06.418 13:39:48 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:06.419 13:39:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:06.419 13:39:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.419 13:39:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.419 ************************************ 00:08:06.419 START TEST skip_rpc 00:08:06.419 ************************************ 00:08:06.419 13:39:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:08:06.419 13:39:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=471326 00:08:06.419 13:39:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:06.419 13:39:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:06.419 13:39:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:08:06.419 [2024-12-05 13:39:48.866757] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:08:06.419 [2024-12-05 13:39:48.866795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid471326 ] 00:08:06.419 [2024-12-05 13:39:48.940158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.419 [2024-12-05 13:39:48.980319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.692 13:39:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:11.692 13:39:53 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:11.692 13:39:53 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:11.692 13:39:53 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:11.692 13:39:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:11.692 13:39:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:11.692 13:39:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:11.692 13:39:53 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:08:11.692 13:39:53 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.692 13:39:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:11.692 13:39:53 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:11.692 13:39:53 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:11.692 13:39:53 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:11.692 13:39:53 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:11.692 13:39:53 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:11.692 13:39:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:11.692 13:39:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 471326 00:08:11.692 13:39:53 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 471326 ']' 00:08:11.692 13:39:53 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 471326 00:08:11.692 13:39:53 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:08:11.692 13:39:53 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.692 13:39:53 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 471326 00:08:11.692 13:39:53 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.693 13:39:53 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:11.693 13:39:53 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 471326' 00:08:11.693 killing process with pid 471326 00:08:11.693 13:39:53 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 471326 00:08:11.693 13:39:53 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 471326 00:08:11.693 00:08:11.693 real 0m5.365s 00:08:11.693 user 0m5.114s 00:08:11.693 sys 0m0.288s 00:08:11.693 13:39:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.693 13:39:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:11.693 ************************************ 00:08:11.693 END TEST skip_rpc 00:08:11.693 ************************************ 00:08:11.693 13:39:54 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:11.693 13:39:54 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:11.693 13:39:54 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.693 13:39:54 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:08:11.693 ************************************ 00:08:11.693 START TEST skip_rpc_with_json 00:08:11.693 ************************************ 00:08:11.693 13:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:08:11.693 13:39:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:11.693 13:39:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=472266 00:08:11.693 13:39:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:11.693 13:39:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:11.693 13:39:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 472266 00:08:11.693 13:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 472266 ']' 00:08:11.693 13:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.693 13:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:11.693 13:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.693 13:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:11.693 13:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:11.952 [2024-12-05 13:39:54.304450] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:08:11.952 [2024-12-05 13:39:54.304510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid472266 ] 00:08:11.952 [2024-12-05 13:39:54.377007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.952 [2024-12-05 13:39:54.419087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.211 13:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.211 13:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:08:12.211 13:39:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:12.211 13:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.211 13:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:12.211 [2024-12-05 13:39:54.641349] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:12.211 request: 00:08:12.211 { 00:08:12.211 "trtype": "tcp", 00:08:12.211 "method": "nvmf_get_transports", 00:08:12.211 "req_id": 1 00:08:12.211 } 00:08:12.211 Got JSON-RPC error response 00:08:12.211 response: 00:08:12.211 { 00:08:12.211 "code": -19, 00:08:12.211 "message": "No such device" 00:08:12.211 } 00:08:12.211 13:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:12.211 13:39:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:12.211 13:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.211 13:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:12.211 [2024-12-05 13:39:54.653455] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.211 13:39:54 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.211 13:39:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:12.211 13:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.211 13:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:12.471 13:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.471 13:39:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:12.471 { 00:08:12.471 "subsystems": [ 00:08:12.471 { 00:08:12.471 "subsystem": "fsdev", 00:08:12.471 "config": [ 00:08:12.471 { 00:08:12.471 "method": "fsdev_set_opts", 00:08:12.471 "params": { 00:08:12.471 "fsdev_io_pool_size": 65535, 00:08:12.471 "fsdev_io_cache_size": 256 00:08:12.471 } 00:08:12.471 } 00:08:12.471 ] 00:08:12.471 }, 00:08:12.471 { 00:08:12.471 "subsystem": "vfio_user_target", 00:08:12.471 "config": null 00:08:12.471 }, 00:08:12.471 { 00:08:12.471 "subsystem": "keyring", 00:08:12.471 "config": [] 00:08:12.471 }, 00:08:12.471 { 00:08:12.471 "subsystem": "iobuf", 00:08:12.471 "config": [ 00:08:12.471 { 00:08:12.471 "method": "iobuf_set_options", 00:08:12.471 "params": { 00:08:12.471 "small_pool_count": 8192, 00:08:12.471 "large_pool_count": 1024, 00:08:12.471 "small_bufsize": 8192, 00:08:12.471 "large_bufsize": 135168, 00:08:12.471 "enable_numa": false 00:08:12.471 } 00:08:12.471 } 00:08:12.471 ] 00:08:12.471 }, 00:08:12.471 { 00:08:12.471 "subsystem": "sock", 00:08:12.471 "config": [ 00:08:12.471 { 00:08:12.471 "method": "sock_set_default_impl", 00:08:12.471 "params": { 00:08:12.471 "impl_name": "posix" 00:08:12.471 } 00:08:12.471 }, 00:08:12.471 { 00:08:12.471 "method": "sock_impl_set_options", 00:08:12.471 "params": { 00:08:12.471 "impl_name": "ssl", 00:08:12.471 "recv_buf_size": 4096, 00:08:12.471 "send_buf_size": 4096, 
00:08:12.471 "enable_recv_pipe": true, 00:08:12.471 "enable_quickack": false, 00:08:12.471 "enable_placement_id": 0, 00:08:12.471 "enable_zerocopy_send_server": true, 00:08:12.471 "enable_zerocopy_send_client": false, 00:08:12.471 "zerocopy_threshold": 0, 00:08:12.471 "tls_version": 0, 00:08:12.471 "enable_ktls": false 00:08:12.471 } 00:08:12.471 }, 00:08:12.471 { 00:08:12.471 "method": "sock_impl_set_options", 00:08:12.471 "params": { 00:08:12.471 "impl_name": "posix", 00:08:12.471 "recv_buf_size": 2097152, 00:08:12.471 "send_buf_size": 2097152, 00:08:12.471 "enable_recv_pipe": true, 00:08:12.471 "enable_quickack": false, 00:08:12.471 "enable_placement_id": 0, 00:08:12.471 "enable_zerocopy_send_server": true, 00:08:12.471 "enable_zerocopy_send_client": false, 00:08:12.471 "zerocopy_threshold": 0, 00:08:12.471 "tls_version": 0, 00:08:12.471 "enable_ktls": false 00:08:12.471 } 00:08:12.471 } 00:08:12.471 ] 00:08:12.471 }, 00:08:12.471 { 00:08:12.471 "subsystem": "vmd", 00:08:12.471 "config": [] 00:08:12.471 }, 00:08:12.471 { 00:08:12.471 "subsystem": "accel", 00:08:12.471 "config": [ 00:08:12.471 { 00:08:12.471 "method": "accel_set_options", 00:08:12.471 "params": { 00:08:12.471 "small_cache_size": 128, 00:08:12.471 "large_cache_size": 16, 00:08:12.471 "task_count": 2048, 00:08:12.471 "sequence_count": 2048, 00:08:12.471 "buf_count": 2048 00:08:12.471 } 00:08:12.471 } 00:08:12.471 ] 00:08:12.471 }, 00:08:12.471 { 00:08:12.471 "subsystem": "bdev", 00:08:12.471 "config": [ 00:08:12.471 { 00:08:12.471 "method": "bdev_set_options", 00:08:12.471 "params": { 00:08:12.471 "bdev_io_pool_size": 65535, 00:08:12.471 "bdev_io_cache_size": 256, 00:08:12.471 "bdev_auto_examine": true, 00:08:12.471 "iobuf_small_cache_size": 128, 00:08:12.471 "iobuf_large_cache_size": 16 00:08:12.471 } 00:08:12.471 }, 00:08:12.471 { 00:08:12.471 "method": "bdev_raid_set_options", 00:08:12.471 "params": { 00:08:12.471 "process_window_size_kb": 1024, 00:08:12.471 "process_max_bandwidth_mb_sec": 0 
00:08:12.471 } 00:08:12.471 }, 00:08:12.471 { 00:08:12.471 "method": "bdev_iscsi_set_options", 00:08:12.471 "params": { 00:08:12.471 "timeout_sec": 30 00:08:12.471 } 00:08:12.471 }, 00:08:12.471 { 00:08:12.471 "method": "bdev_nvme_set_options", 00:08:12.471 "params": { 00:08:12.471 "action_on_timeout": "none", 00:08:12.471 "timeout_us": 0, 00:08:12.471 "timeout_admin_us": 0, 00:08:12.471 "keep_alive_timeout_ms": 10000, 00:08:12.471 "arbitration_burst": 0, 00:08:12.471 "low_priority_weight": 0, 00:08:12.471 "medium_priority_weight": 0, 00:08:12.471 "high_priority_weight": 0, 00:08:12.471 "nvme_adminq_poll_period_us": 10000, 00:08:12.471 "nvme_ioq_poll_period_us": 0, 00:08:12.471 "io_queue_requests": 0, 00:08:12.471 "delay_cmd_submit": true, 00:08:12.471 "transport_retry_count": 4, 00:08:12.471 "bdev_retry_count": 3, 00:08:12.471 "transport_ack_timeout": 0, 00:08:12.471 "ctrlr_loss_timeout_sec": 0, 00:08:12.471 "reconnect_delay_sec": 0, 00:08:12.471 "fast_io_fail_timeout_sec": 0, 00:08:12.471 "disable_auto_failback": false, 00:08:12.471 "generate_uuids": false, 00:08:12.471 "transport_tos": 0, 00:08:12.471 "nvme_error_stat": false, 00:08:12.471 "rdma_srq_size": 0, 00:08:12.471 "io_path_stat": false, 00:08:12.471 "allow_accel_sequence": false, 00:08:12.471 "rdma_max_cq_size": 0, 00:08:12.471 "rdma_cm_event_timeout_ms": 0, 00:08:12.471 "dhchap_digests": [ 00:08:12.471 "sha256", 00:08:12.471 "sha384", 00:08:12.472 "sha512" 00:08:12.472 ], 00:08:12.472 "dhchap_dhgroups": [ 00:08:12.472 "null", 00:08:12.472 "ffdhe2048", 00:08:12.472 "ffdhe3072", 00:08:12.472 "ffdhe4096", 00:08:12.472 "ffdhe6144", 00:08:12.472 "ffdhe8192" 00:08:12.472 ] 00:08:12.472 } 00:08:12.472 }, 00:08:12.472 { 00:08:12.472 "method": "bdev_nvme_set_hotplug", 00:08:12.472 "params": { 00:08:12.472 "period_us": 100000, 00:08:12.472 "enable": false 00:08:12.472 } 00:08:12.472 }, 00:08:12.472 { 00:08:12.472 "method": "bdev_wait_for_examine" 00:08:12.472 } 00:08:12.472 ] 00:08:12.472 }, 00:08:12.472 { 
00:08:12.472 "subsystem": "scsi", 00:08:12.472 "config": null 00:08:12.472 }, 00:08:12.472 { 00:08:12.472 "subsystem": "scheduler", 00:08:12.472 "config": [ 00:08:12.472 { 00:08:12.472 "method": "framework_set_scheduler", 00:08:12.472 "params": { 00:08:12.472 "name": "static" 00:08:12.472 } 00:08:12.472 } 00:08:12.472 ] 00:08:12.472 }, 00:08:12.472 { 00:08:12.472 "subsystem": "vhost_scsi", 00:08:12.472 "config": [] 00:08:12.472 }, 00:08:12.472 { 00:08:12.472 "subsystem": "vhost_blk", 00:08:12.472 "config": [] 00:08:12.472 }, 00:08:12.472 { 00:08:12.472 "subsystem": "ublk", 00:08:12.472 "config": [] 00:08:12.472 }, 00:08:12.472 { 00:08:12.472 "subsystem": "nbd", 00:08:12.472 "config": [] 00:08:12.472 }, 00:08:12.472 { 00:08:12.472 "subsystem": "nvmf", 00:08:12.472 "config": [ 00:08:12.472 { 00:08:12.472 "method": "nvmf_set_config", 00:08:12.472 "params": { 00:08:12.472 "discovery_filter": "match_any", 00:08:12.472 "admin_cmd_passthru": { 00:08:12.472 "identify_ctrlr": false 00:08:12.472 }, 00:08:12.472 "dhchap_digests": [ 00:08:12.472 "sha256", 00:08:12.472 "sha384", 00:08:12.472 "sha512" 00:08:12.472 ], 00:08:12.472 "dhchap_dhgroups": [ 00:08:12.472 "null", 00:08:12.472 "ffdhe2048", 00:08:12.472 "ffdhe3072", 00:08:12.472 "ffdhe4096", 00:08:12.472 "ffdhe6144", 00:08:12.472 "ffdhe8192" 00:08:12.472 ] 00:08:12.472 } 00:08:12.472 }, 00:08:12.472 { 00:08:12.472 "method": "nvmf_set_max_subsystems", 00:08:12.472 "params": { 00:08:12.472 "max_subsystems": 1024 00:08:12.472 } 00:08:12.472 }, 00:08:12.472 { 00:08:12.472 "method": "nvmf_set_crdt", 00:08:12.472 "params": { 00:08:12.472 "crdt1": 0, 00:08:12.472 "crdt2": 0, 00:08:12.472 "crdt3": 0 00:08:12.472 } 00:08:12.472 }, 00:08:12.472 { 00:08:12.472 "method": "nvmf_create_transport", 00:08:12.472 "params": { 00:08:12.472 "trtype": "TCP", 00:08:12.472 "max_queue_depth": 128, 00:08:12.472 "max_io_qpairs_per_ctrlr": 127, 00:08:12.472 "in_capsule_data_size": 4096, 00:08:12.472 "max_io_size": 131072, 00:08:12.472 
"io_unit_size": 131072, 00:08:12.472 "max_aq_depth": 128, 00:08:12.472 "num_shared_buffers": 511, 00:08:12.472 "buf_cache_size": 4294967295, 00:08:12.472 "dif_insert_or_strip": false, 00:08:12.472 "zcopy": false, 00:08:12.472 "c2h_success": true, 00:08:12.472 "sock_priority": 0, 00:08:12.472 "abort_timeout_sec": 1, 00:08:12.472 "ack_timeout": 0, 00:08:12.472 "data_wr_pool_size": 0 00:08:12.472 } 00:08:12.472 } 00:08:12.472 ] 00:08:12.472 }, 00:08:12.472 { 00:08:12.472 "subsystem": "iscsi", 00:08:12.472 "config": [ 00:08:12.472 { 00:08:12.472 "method": "iscsi_set_options", 00:08:12.472 "params": { 00:08:12.472 "node_base": "iqn.2016-06.io.spdk", 00:08:12.472 "max_sessions": 128, 00:08:12.472 "max_connections_per_session": 2, 00:08:12.472 "max_queue_depth": 64, 00:08:12.472 "default_time2wait": 2, 00:08:12.472 "default_time2retain": 20, 00:08:12.472 "first_burst_length": 8192, 00:08:12.472 "immediate_data": true, 00:08:12.472 "allow_duplicated_isid": false, 00:08:12.472 "error_recovery_level": 0, 00:08:12.472 "nop_timeout": 60, 00:08:12.472 "nop_in_interval": 30, 00:08:12.472 "disable_chap": false, 00:08:12.472 "require_chap": false, 00:08:12.472 "mutual_chap": false, 00:08:12.472 "chap_group": 0, 00:08:12.472 "max_large_datain_per_connection": 64, 00:08:12.472 "max_r2t_per_connection": 4, 00:08:12.472 "pdu_pool_size": 36864, 00:08:12.472 "immediate_data_pool_size": 16384, 00:08:12.472 "data_out_pool_size": 2048 00:08:12.472 } 00:08:12.472 } 00:08:12.472 ] 00:08:12.472 } 00:08:12.472 ] 00:08:12.472 } 00:08:12.472 13:39:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:12.472 13:39:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 472266 00:08:12.472 13:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 472266 ']' 00:08:12.472 13:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 472266 00:08:12.472 13:39:54 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:08:12.472 13:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:12.472 13:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 472266 00:08:12.472 13:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:12.472 13:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:12.472 13:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 472266' 00:08:12.472 killing process with pid 472266 00:08:12.472 13:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 472266 00:08:12.472 13:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 472266 00:08:12.730 13:39:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=472387 00:08:12.730 13:39:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:12.730 13:39:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:18.055 13:40:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 472387 00:08:18.055 13:40:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 472387 ']' 00:08:18.055 13:40:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 472387 00:08:18.055 13:40:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:18.055 13:40:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:18.055 13:40:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 472387 00:08:18.055 13:40:00 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:18.055 13:40:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:18.055 13:40:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 472387' 00:08:18.055 killing process with pid 472387 00:08:18.055 13:40:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 472387 00:08:18.055 13:40:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 472387 00:08:18.055 13:40:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:18.055 13:40:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:18.055 00:08:18.055 real 0m6.286s 00:08:18.055 user 0m5.959s 00:08:18.055 sys 0m0.615s 00:08:18.055 13:40:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.055 13:40:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:18.055 ************************************ 00:08:18.055 END TEST skip_rpc_with_json 00:08:18.055 ************************************ 00:08:18.055 13:40:00 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:18.055 13:40:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:18.055 13:40:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.055 13:40:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.055 ************************************ 00:08:18.055 START TEST skip_rpc_with_delay 00:08:18.055 ************************************ 00:08:18.055 13:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:08:18.055 13:40:00 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:18.055 13:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:08:18.055 13:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:18.055 13:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:18.055 13:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:18.055 13:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:18.055 13:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:18.055 13:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:18.055 13:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:18.055 13:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:18.055 13:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:18.055 13:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:18.313 [2024-12-05 13:40:00.658232] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:08:18.313 13:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:08:18.313 13:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:18.313 13:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:18.313 13:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:18.313 00:08:18.313 real 0m0.069s 00:08:18.313 user 0m0.042s 00:08:18.313 sys 0m0.027s 00:08:18.313 13:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.313 13:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:18.313 ************************************ 00:08:18.313 END TEST skip_rpc_with_delay 00:08:18.313 ************************************ 00:08:18.313 13:40:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:18.313 13:40:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:18.313 13:40:00 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:18.313 13:40:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:18.313 13:40:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.313 13:40:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.313 ************************************ 00:08:18.313 START TEST exit_on_failed_rpc_init 00:08:18.313 ************************************ 00:08:18.313 13:40:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:08:18.313 13:40:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=473360 00:08:18.313 13:40:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 473360 00:08:18.313 13:40:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:08:18.313 13:40:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 473360 ']' 00:08:18.313 13:40:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.313 13:40:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.313 13:40:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.313 13:40:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.313 13:40:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:18.313 [2024-12-05 13:40:00.793446] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:08:18.313 [2024-12-05 13:40:00.793490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid473360 ] 00:08:18.313 [2024-12-05 13:40:00.867156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.571 [2024-12-05 13:40:00.909945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.571 13:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.571 13:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:08:18.571 13:40:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:18.571 13:40:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:18.571 
13:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:08:18.571 13:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:18.571 13:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:18.571 13:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:18.571 13:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:18.571 13:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:18.571 13:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:18.571 13:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:18.571 13:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:18.571 13:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:18.571 13:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:18.834 [2024-12-05 13:40:01.186150] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:08:18.834 [2024-12-05 13:40:01.186196] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid473505 ] 00:08:18.834 [2024-12-05 13:40:01.257110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.834 [2024-12-05 13:40:01.297429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.834 [2024-12-05 13:40:01.297488] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:08:18.834 [2024-12-05 13:40:01.297496] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:18.834 [2024-12-05 13:40:01.297503] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:18.834 13:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:08:18.834 13:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:18.834 13:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:08:18.834 13:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:08:18.834 13:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:08:18.834 13:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:18.834 13:40:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:18.834 13:40:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 473360 00:08:18.834 13:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 473360 ']' 00:08:18.834 13:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 473360 00:08:18.834 13:40:01 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:08:18.834 13:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:18.834 13:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 473360 00:08:18.834 13:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:18.835 13:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:18.835 13:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 473360' 00:08:18.835 killing process with pid 473360 00:08:18.835 13:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 473360 00:08:18.835 13:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 473360 00:08:19.401 00:08:19.401 real 0m0.948s 00:08:19.401 user 0m1.012s 00:08:19.401 sys 0m0.381s 00:08:19.401 13:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.401 13:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:19.401 ************************************ 00:08:19.401 END TEST exit_on_failed_rpc_init 00:08:19.401 ************************************ 00:08:19.401 13:40:01 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:19.401 00:08:19.401 real 0m13.120s 00:08:19.401 user 0m12.338s 00:08:19.401 sys 0m1.583s 00:08:19.401 13:40:01 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.401 13:40:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.401 ************************************ 00:08:19.401 END TEST skip_rpc 00:08:19.401 ************************************ 00:08:19.401 13:40:01 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:19.401 13:40:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:19.401 13:40:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.401 13:40:01 -- common/autotest_common.sh@10 -- # set +x 00:08:19.401 ************************************ 00:08:19.401 START TEST rpc_client 00:08:19.401 ************************************ 00:08:19.401 13:40:01 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:19.401 * Looking for test storage... 00:08:19.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:08:19.401 13:40:01 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:19.401 13:40:01 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:08:19.401 13:40:01 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:19.401 13:40:01 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:19.401 13:40:01 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.401 13:40:01 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.401 13:40:01 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.401 13:40:01 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.401 13:40:01 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.401 13:40:01 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.401 13:40:01 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.401 13:40:01 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.401 13:40:01 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.401 13:40:01 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.401 13:40:01 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.401 13:40:01 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:08:19.401 13:40:01 rpc_client -- scripts/common.sh@345 -- # : 1 00:08:19.401 13:40:01 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.401 13:40:01 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:19.401 13:40:01 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:08:19.401 13:40:01 rpc_client -- scripts/common.sh@353 -- # local d=1 00:08:19.401 13:40:01 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.401 13:40:01 rpc_client -- scripts/common.sh@355 -- # echo 1 00:08:19.401 13:40:01 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.401 13:40:01 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:08:19.401 13:40:01 rpc_client -- scripts/common.sh@353 -- # local d=2 00:08:19.401 13:40:01 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.401 13:40:01 rpc_client -- scripts/common.sh@355 -- # echo 2 00:08:19.401 13:40:01 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.401 13:40:01 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.401 13:40:01 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.401 13:40:01 rpc_client -- scripts/common.sh@368 -- # return 0 00:08:19.401 13:40:01 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.401 13:40:01 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:19.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.401 --rc genhtml_branch_coverage=1 00:08:19.401 --rc genhtml_function_coverage=1 00:08:19.401 --rc genhtml_legend=1 00:08:19.401 --rc geninfo_all_blocks=1 00:08:19.402 --rc geninfo_unexecuted_blocks=1 00:08:19.402 00:08:19.402 ' 00:08:19.402 13:40:01 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:19.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.402 --rc genhtml_branch_coverage=1 
00:08:19.402 --rc genhtml_function_coverage=1 00:08:19.402 --rc genhtml_legend=1 00:08:19.402 --rc geninfo_all_blocks=1 00:08:19.402 --rc geninfo_unexecuted_blocks=1 00:08:19.402 00:08:19.402 ' 00:08:19.402 13:40:01 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:19.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.402 --rc genhtml_branch_coverage=1 00:08:19.402 --rc genhtml_function_coverage=1 00:08:19.402 --rc genhtml_legend=1 00:08:19.402 --rc geninfo_all_blocks=1 00:08:19.402 --rc geninfo_unexecuted_blocks=1 00:08:19.402 00:08:19.402 ' 00:08:19.402 13:40:01 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:19.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.402 --rc genhtml_branch_coverage=1 00:08:19.402 --rc genhtml_function_coverage=1 00:08:19.402 --rc genhtml_legend=1 00:08:19.402 --rc geninfo_all_blocks=1 00:08:19.402 --rc geninfo_unexecuted_blocks=1 00:08:19.402 00:08:19.402 ' 00:08:19.402 13:40:01 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:08:19.661 OK 00:08:19.661 13:40:01 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:19.661 00:08:19.661 real 0m0.200s 00:08:19.661 user 0m0.122s 00:08:19.661 sys 0m0.092s 00:08:19.661 13:40:01 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.661 13:40:01 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:19.661 ************************************ 00:08:19.661 END TEST rpc_client 00:08:19.661 ************************************ 00:08:19.661 13:40:02 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:08:19.661 13:40:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:19.661 13:40:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.661 13:40:02 -- common/autotest_common.sh@10 
-- # set +x 00:08:19.661 ************************************ 00:08:19.661 START TEST json_config 00:08:19.661 ************************************ 00:08:19.661 13:40:02 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:08:19.661 13:40:02 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:19.661 13:40:02 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:08:19.661 13:40:02 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:19.661 13:40:02 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:19.661 13:40:02 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.661 13:40:02 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.661 13:40:02 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.661 13:40:02 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.661 13:40:02 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.661 13:40:02 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.661 13:40:02 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.661 13:40:02 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.661 13:40:02 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.661 13:40:02 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.661 13:40:02 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.661 13:40:02 json_config -- scripts/common.sh@344 -- # case "$op" in 00:08:19.661 13:40:02 json_config -- scripts/common.sh@345 -- # : 1 00:08:19.661 13:40:02 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.661 13:40:02 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:19.661 13:40:02 json_config -- scripts/common.sh@365 -- # decimal 1 00:08:19.661 13:40:02 json_config -- scripts/common.sh@353 -- # local d=1 00:08:19.661 13:40:02 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.661 13:40:02 json_config -- scripts/common.sh@355 -- # echo 1 00:08:19.661 13:40:02 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.661 13:40:02 json_config -- scripts/common.sh@366 -- # decimal 2 00:08:19.661 13:40:02 json_config -- scripts/common.sh@353 -- # local d=2 00:08:19.661 13:40:02 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.661 13:40:02 json_config -- scripts/common.sh@355 -- # echo 2 00:08:19.661 13:40:02 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.661 13:40:02 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.661 13:40:02 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.661 13:40:02 json_config -- scripts/common.sh@368 -- # return 0 00:08:19.661 13:40:02 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.661 13:40:02 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:19.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.661 --rc genhtml_branch_coverage=1 00:08:19.661 --rc genhtml_function_coverage=1 00:08:19.661 --rc genhtml_legend=1 00:08:19.661 --rc geninfo_all_blocks=1 00:08:19.661 --rc geninfo_unexecuted_blocks=1 00:08:19.661 00:08:19.661 ' 00:08:19.661 13:40:02 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:19.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.661 --rc genhtml_branch_coverage=1 00:08:19.661 --rc genhtml_function_coverage=1 00:08:19.661 --rc genhtml_legend=1 00:08:19.661 --rc geninfo_all_blocks=1 00:08:19.661 --rc geninfo_unexecuted_blocks=1 00:08:19.661 00:08:19.661 ' 00:08:19.661 13:40:02 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:19.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.661 --rc genhtml_branch_coverage=1 00:08:19.661 --rc genhtml_function_coverage=1 00:08:19.661 --rc genhtml_legend=1 00:08:19.661 --rc geninfo_all_blocks=1 00:08:19.661 --rc geninfo_unexecuted_blocks=1 00:08:19.661 00:08:19.661 ' 00:08:19.661 13:40:02 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:19.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.661 --rc genhtml_branch_coverage=1 00:08:19.661 --rc genhtml_function_coverage=1 00:08:19.661 --rc genhtml_legend=1 00:08:19.661 --rc geninfo_all_blocks=1 00:08:19.661 --rc geninfo_unexecuted_blocks=1 00:08:19.661 00:08:19.661 ' 00:08:19.661 13:40:02 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.661 13:40:02 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:19.661 13:40:02 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.661 13:40:02 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.661 13:40:02 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.661 13:40:02 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.661 13:40:02 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.661 13:40:02 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.661 13:40:02 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.661 13:40:02 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.661 13:40:02 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.661 13:40:02 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.661 13:40:02 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:19.661 13:40:02 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:19.661 13:40:02 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.661 13:40:02 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.662 13:40:02 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:19.662 13:40:02 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.662 13:40:02 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:19.662 13:40:02 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:08:19.662 13:40:02 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.662 13:40:02 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.662 13:40:02 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.662 13:40:02 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.662 13:40:02 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.662 13:40:02 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.662 13:40:02 json_config -- paths/export.sh@5 -- # export PATH 00:08:19.662 13:40:02 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.662 13:40:02 json_config -- nvmf/common.sh@51 -- # : 0 00:08:19.662 13:40:02 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:19.662 13:40:02 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:19.662 13:40:02 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.662 13:40:02 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.662 13:40:02 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.662 13:40:02 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:19.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:19.662 13:40:02 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:19.662 13:40:02 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:19.662 13:40:02 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:19.662 13:40:02 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:19.662 13:40:02 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:19.662 13:40:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:19.662 13:40:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:19.662 13:40:02 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:19.662 13:40:02 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:08:19.662 13:40:02 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:08:19.662 13:40:02 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:08:19.662 13:40:02 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:08:19.921 13:40:02 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:08:19.921 13:40:02 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:08:19.921 13:40:02 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:08:19.921 13:40:02 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:08:19.921 13:40:02 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:08:19.921 13:40:02 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:19.921 13:40:02 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:08:19.921 INFO: JSON configuration test init 00:08:19.921 13:40:02 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:08:19.921 13:40:02 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:08:19.921 13:40:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:19.921 13:40:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:19.921 13:40:02 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:08:19.921 13:40:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:19.921 13:40:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:19.921 13:40:02 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:08:19.921 13:40:02 json_config -- json_config/common.sh@9 -- # local app=target 00:08:19.921 13:40:02 json_config -- json_config/common.sh@10 -- # shift 00:08:19.921 13:40:02 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:19.921 13:40:02 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:19.921 13:40:02 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:19.921 13:40:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:19.921 13:40:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:19.921 13:40:02 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=473731 00:08:19.921 13:40:02 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:19.921 Waiting for target to run... 
00:08:19.921 13:40:02 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:19.921 13:40:02 json_config -- json_config/common.sh@25 -- # waitforlisten 473731 /var/tmp/spdk_tgt.sock 00:08:19.921 13:40:02 json_config -- common/autotest_common.sh@835 -- # '[' -z 473731 ']' 00:08:19.921 13:40:02 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:19.921 13:40:02 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:19.921 13:40:02 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:19.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:19.921 13:40:02 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:19.921 13:40:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:19.921 [2024-12-05 13:40:02.317195] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:08:19.921 [2024-12-05 13:40:02.317247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid473731 ] 00:08:20.487 [2024-12-05 13:40:02.783490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.487 [2024-12-05 13:40:02.842234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.746 13:40:03 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.746 13:40:03 json_config -- common/autotest_common.sh@868 -- # return 0 00:08:20.746 13:40:03 json_config -- json_config/common.sh@26 -- # echo '' 00:08:20.746 00:08:20.746 13:40:03 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:08:20.746 13:40:03 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:08:20.746 13:40:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:20.746 13:40:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:20.746 13:40:03 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:08:20.746 13:40:03 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:08:20.746 13:40:03 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:20.746 13:40:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:20.746 13:40:03 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:20.746 13:40:03 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:08:20.746 13:40:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:24.030 13:40:06 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:08:24.030 13:40:06 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:08:24.030 13:40:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:24.030 13:40:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:24.030 13:40:06 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:08:24.030 13:40:06 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:08:24.030 13:40:06 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:08:24.030 13:40:06 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:08:24.030 13:40:06 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:08:24.030 13:40:06 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:08:24.030 13:40:06 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:08:24.030 13:40:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:24.030 13:40:06 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:08:24.030 13:40:06 json_config -- json_config/json_config.sh@51 -- # local get_types 00:08:24.030 13:40:06 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:08:24.030 13:40:06 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:08:24.030 13:40:06 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:08:24.030 13:40:06 json_config -- json_config/json_config.sh@54 -- # sort 00:08:24.030 13:40:06 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:08:24.030 13:40:06 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:08:24.030 13:40:06 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:08:24.030 13:40:06 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:08:24.030 13:40:06 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:24.030 13:40:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:24.030 13:40:06 json_config -- json_config/json_config.sh@62 -- # return 0 00:08:24.030 13:40:06 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:08:24.030 13:40:06 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:08:24.030 13:40:06 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:08:24.030 13:40:06 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:08:24.030 13:40:06 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:08:24.030 13:40:06 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:08:24.030 13:40:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:24.030 13:40:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:24.030 13:40:06 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:08:24.030 13:40:06 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:08:24.030 13:40:06 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:08:24.030 13:40:06 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:24.031 13:40:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:24.289 MallocForNvmf0 00:08:24.289 13:40:06 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:08:24.289 13:40:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:24.549 MallocForNvmf1 00:08:24.549 13:40:06 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:08:24.549 13:40:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:08:24.549 [2024-12-05 13:40:07.053363] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.549 13:40:07 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:24.549 13:40:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:24.808 13:40:07 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:24.808 13:40:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:25.066 13:40:07 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:25.066 13:40:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:25.066 13:40:07 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:25.066 13:40:07 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:25.397 [2024-12-05 13:40:07.775639] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:25.397 13:40:07 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:08:25.397 13:40:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:25.397 13:40:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:25.397 13:40:07 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:08:25.397 13:40:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:25.397 13:40:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:25.397 13:40:07 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:08:25.397 13:40:07 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:25.397 13:40:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:25.670 MallocBdevForConfigChangeCheck 00:08:25.670 13:40:08 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:08:25.670 13:40:08 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:25.670 13:40:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:25.670 13:40:08 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:08:25.670 13:40:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:25.929 13:40:08 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:08:25.929 INFO: shutting down applications... 00:08:25.929 13:40:08 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:08:25.929 13:40:08 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:08:25.929 13:40:08 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:08:25.929 13:40:08 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:08:28.461 Calling clear_iscsi_subsystem 00:08:28.461 Calling clear_nvmf_subsystem 00:08:28.461 Calling clear_nbd_subsystem 00:08:28.461 Calling clear_ublk_subsystem 00:08:28.461 Calling clear_vhost_blk_subsystem 00:08:28.461 Calling clear_vhost_scsi_subsystem 00:08:28.461 Calling clear_bdev_subsystem 00:08:28.461 13:40:10 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:08:28.461 13:40:10 json_config -- json_config/json_config.sh@350 -- # count=100 00:08:28.461 13:40:10 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:08:28.461 13:40:10 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:28.461 13:40:10 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:28.461 13:40:10 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:08:28.461 13:40:10 json_config -- json_config/json_config.sh@352 -- # break 00:08:28.461 13:40:10 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:08:28.461 13:40:10 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:08:28.461 13:40:10 json_config -- json_config/common.sh@31 -- # local app=target 00:08:28.461 13:40:10 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:28.461 13:40:10 json_config -- json_config/common.sh@35 -- # [[ -n 473731 ]] 00:08:28.461 13:40:10 json_config -- json_config/common.sh@38 -- # kill -SIGINT 473731 00:08:28.461 13:40:10 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:28.461 13:40:10 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:28.461 13:40:10 json_config -- json_config/common.sh@41 -- # kill -0 473731 00:08:28.461 13:40:10 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:08:29.028 13:40:11 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:08:29.028 13:40:11 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:29.028 13:40:11 json_config -- json_config/common.sh@41 -- # kill -0 473731 00:08:29.028 13:40:11 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:29.028 13:40:11 json_config -- json_config/common.sh@43 -- # break 00:08:29.028 13:40:11 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:29.028 13:40:11 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:29.028 SPDK target shutdown done 00:08:29.028 13:40:11 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:08:29.028 INFO: relaunching applications... 
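The shutdown sequence traced above (kill -SIGINT, then up to 30 half-second `kill -0` probes before giving up) can be reproduced as a standalone sketch. The `sleep` child below is a hypothetical stand-in for the spdk_tgt process; `set -m` is needed because background jobs in non-interactive shells otherwise ignore SIGINT:

```shell
#!/usr/bin/env bash
# Sketch of the shutdown loop in json_config/common.sh: send SIGINT,
# then poll with `kill -0` (which checks process existence without
# delivering a signal) for up to 30 half-second intervals.
set -m    # job control: background children get default SIGINT handling

shutdown_app() {
    local pid=$1 i
    kill -SIGINT "$pid" 2>/dev/null
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    echo 'shutdown timed out' >&2
    return 1
}

sleep 3 &                 # hypothetical stand-in for spdk_tgt
demo_pid=$!
shutdown_app "$demo_pid" && status=done || status=timeout
echo "$status"
```

The real helper additionally clears `app_pid["$app"]` once the process is gone, as the trace shows.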
00:08:29.028 13:40:11 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:29.028 13:40:11 json_config -- json_config/common.sh@9 -- # local app=target 00:08:29.028 13:40:11 json_config -- json_config/common.sh@10 -- # shift 00:08:29.028 13:40:11 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:29.028 13:40:11 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:29.028 13:40:11 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:29.028 13:40:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:29.028 13:40:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:29.028 13:40:11 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=475468 00:08:29.028 13:40:11 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:29.028 Waiting for target to run... 00:08:29.028 13:40:11 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:29.028 13:40:11 json_config -- json_config/common.sh@25 -- # waitforlisten 475468 /var/tmp/spdk_tgt.sock 00:08:29.028 13:40:11 json_config -- common/autotest_common.sh@835 -- # '[' -z 475468 ']' 00:08:29.028 13:40:11 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:29.028 13:40:11 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:29.028 13:40:11 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:29.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:08:29.028 13:40:11 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:29.028 13:40:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:29.028 [2024-12-05 13:40:11.437137] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:08:29.028 [2024-12-05 13:40:11.437190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid475468 ] 00:08:29.286 [2024-12-05 13:40:11.718588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.286 [2024-12-05 13:40:11.752317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.577 [2024-12-05 13:40:14.783145] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.577 [2024-12-05 13:40:14.815511] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:32.577 13:40:14 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.577 13:40:14 json_config -- common/autotest_common.sh@868 -- # return 0 00:08:32.577 13:40:14 json_config -- json_config/common.sh@26 -- # echo '' 00:08:32.577 00:08:32.577 13:40:14 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:08:32.577 13:40:14 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:32.577 INFO: Checking if target configuration is the same... 
00:08:32.577 13:40:14 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:32.577 13:40:14 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:08:32.577 13:40:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:32.577 + '[' 2 -ne 2 ']' 00:08:32.577 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:32.577 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:08:32.577 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:32.577 +++ basename /dev/fd/62 00:08:32.577 ++ mktemp /tmp/62.XXX 00:08:32.577 + tmp_file_1=/tmp/62.ypr 00:08:32.577 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:32.577 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:32.577 + tmp_file_2=/tmp/spdk_tgt_config.json.Cbs 00:08:32.577 + ret=0 00:08:32.577 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:32.835 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:32.835 + diff -u /tmp/62.ypr /tmp/spdk_tgt_config.json.Cbs 00:08:32.835 + echo 'INFO: JSON config files are the same' 00:08:32.835 INFO: JSON config files are the same 00:08:32.835 + rm /tmp/62.ypr /tmp/spdk_tgt_config.json.Cbs 00:08:32.835 + exit 0 00:08:32.835 13:40:15 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:08:32.835 13:40:15 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:08:32.835 INFO: changing configuration and checking if this can be detected... 
00:08:32.835 13:40:15 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:32.835 13:40:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:33.093 13:40:15 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:33.093 13:40:15 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:08:33.093 13:40:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:33.093 + '[' 2 -ne 2 ']' 00:08:33.093 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:33.093 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:08:33.093 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:33.093 +++ basename /dev/fd/62 00:08:33.093 ++ mktemp /tmp/62.XXX 00:08:33.093 + tmp_file_1=/tmp/62.fWf 00:08:33.093 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:33.093 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:33.093 + tmp_file_2=/tmp/spdk_tgt_config.json.Vok 00:08:33.093 + ret=0 00:08:33.093 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:33.350 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:33.350 + diff -u /tmp/62.fWf /tmp/spdk_tgt_config.json.Vok 00:08:33.350 + ret=1 00:08:33.350 + echo '=== Start of file: /tmp/62.fWf ===' 00:08:33.350 + cat /tmp/62.fWf 00:08:33.350 + echo '=== End of file: /tmp/62.fWf ===' 00:08:33.350 + echo '' 00:08:33.351 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Vok ===' 00:08:33.351 + cat /tmp/spdk_tgt_config.json.Vok 00:08:33.351 + echo '=== End of file: /tmp/spdk_tgt_config.json.Vok ===' 00:08:33.351 + echo '' 00:08:33.351 + rm /tmp/62.fWf /tmp/spdk_tgt_config.json.Vok 00:08:33.351 + exit 1 00:08:33.351 13:40:15 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:08:33.351 INFO: configuration change detected. 
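The two comparisons traced above follow one pattern: dump the live configuration twice (`save_config`), normalize both dumps so key order is irrelevant, then `diff` the normalized files. A minimal sketch of that pattern, with `python3`'s json module standing in for `config_filter.py -method sort` and made-up sample configs (assumes python3 is on PATH):

```shell
#!/usr/bin/env bash
# Sketch of the json_diff.sh pattern: normalize two JSON documents
# (sorted keys, fixed indentation), then compare with diff. The
# configs below are illustrative, not real SPDK output.
normalize() {
    python3 -c 'import json, sys; print(json.dumps(json.load(sys.stdin), sort_keys=True, indent=2))'
}

cfg_before='{"subsystems": [{"subsystem": "bdev"}], "method": "save_config"}'
cfg_after='{"method": "save_config", "subsystems": [{"subsystem": "bdev"}]}'

tmp1=$(mktemp /tmp/cfg.XXXXXX)
tmp2=$(mktemp /tmp/cfg.XXXXXX)
normalize <<<"$cfg_before" > "$tmp1"
normalize <<<"$cfg_after"  > "$tmp2"

if diff -u "$tmp1" "$tmp2" > /dev/null; then
    verdict='INFO: JSON config files are the same'
else
    verdict='INFO: configuration change detected.'
fi
echo "$verdict"
rm -f "$tmp1" "$tmp2"
```

With identical content in a different key order, normalization makes the files byte-identical and `diff` exits 0, which is exactly how the test distinguishes "same" from the deliberate `bdev_malloc_delete` change above.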
00:08:33.351 13:40:15 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:08:33.351 13:40:15 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:08:33.351 13:40:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:33.351 13:40:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:33.351 13:40:15 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:08:33.351 13:40:15 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:08:33.351 13:40:15 json_config -- json_config/json_config.sh@324 -- # [[ -n 475468 ]] 00:08:33.351 13:40:15 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:08:33.351 13:40:15 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:08:33.351 13:40:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:33.351 13:40:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:33.351 13:40:15 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:08:33.351 13:40:15 json_config -- json_config/json_config.sh@200 -- # uname -s 00:08:33.351 13:40:15 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:08:33.351 13:40:15 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:08:33.351 13:40:15 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:08:33.351 13:40:15 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:08:33.351 13:40:15 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:33.351 13:40:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:33.351 13:40:15 json_config -- json_config/json_config.sh@330 -- # killprocess 475468 00:08:33.351 13:40:15 json_config -- common/autotest_common.sh@954 -- # '[' -z 475468 ']' 00:08:33.351 13:40:15 json_config -- common/autotest_common.sh@958 -- # kill -0 475468 
00:08:33.351 13:40:15 json_config -- common/autotest_common.sh@959 -- # uname 00:08:33.351 13:40:15 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:33.351 13:40:15 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 475468 00:08:33.609 13:40:15 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:33.609 13:40:15 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:33.609 13:40:15 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 475468' 00:08:33.609 killing process with pid 475468 00:08:33.609 13:40:15 json_config -- common/autotest_common.sh@973 -- # kill 475468 00:08:33.609 13:40:15 json_config -- common/autotest_common.sh@978 -- # wait 475468 00:08:35.511 13:40:17 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:35.511 13:40:17 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:08:35.511 13:40:17 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:35.511 13:40:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:35.511 13:40:17 json_config -- json_config/json_config.sh@335 -- # return 0 00:08:35.511 13:40:17 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:08:35.511 INFO: Success 00:08:35.511 00:08:35.511 real 0m15.930s 00:08:35.511 user 0m16.294s 00:08:35.511 sys 0m2.522s 00:08:35.511 13:40:17 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.511 13:40:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:35.511 ************************************ 00:08:35.511 END TEST json_config 00:08:35.511 ************************************ 00:08:35.511 13:40:18 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:35.511 13:40:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:35.511 13:40:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.511 13:40:18 -- common/autotest_common.sh@10 -- # set +x 00:08:35.511 ************************************ 00:08:35.511 START TEST json_config_extra_key 00:08:35.511 ************************************ 00:08:35.511 13:40:18 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:35.771 13:40:18 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:35.771 13:40:18 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:08:35.771 13:40:18 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:35.771 13:40:18 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:08:35.771 13:40:18 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:35.771 13:40:18 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:35.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.771 --rc genhtml_branch_coverage=1 00:08:35.771 --rc genhtml_function_coverage=1 00:08:35.771 --rc genhtml_legend=1 00:08:35.771 --rc geninfo_all_blocks=1 
00:08:35.771 --rc geninfo_unexecuted_blocks=1 00:08:35.771 00:08:35.771 ' 00:08:35.771 13:40:18 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:35.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.771 --rc genhtml_branch_coverage=1 00:08:35.771 --rc genhtml_function_coverage=1 00:08:35.771 --rc genhtml_legend=1 00:08:35.771 --rc geninfo_all_blocks=1 00:08:35.771 --rc geninfo_unexecuted_blocks=1 00:08:35.771 00:08:35.771 ' 00:08:35.771 13:40:18 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:35.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.771 --rc genhtml_branch_coverage=1 00:08:35.771 --rc genhtml_function_coverage=1 00:08:35.771 --rc genhtml_legend=1 00:08:35.771 --rc geninfo_all_blocks=1 00:08:35.771 --rc geninfo_unexecuted_blocks=1 00:08:35.771 00:08:35.771 ' 00:08:35.771 13:40:18 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:35.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.771 --rc genhtml_branch_coverage=1 00:08:35.771 --rc genhtml_function_coverage=1 00:08:35.771 --rc genhtml_legend=1 00:08:35.771 --rc geninfo_all_blocks=1 00:08:35.771 --rc geninfo_unexecuted_blocks=1 00:08:35.771 00:08:35.771 ' 00:08:35.771 13:40:18 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:35.771 13:40:18 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:35.771 13:40:18 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.771 13:40:18 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.771 13:40:18 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.771 13:40:18 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.771 13:40:18 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:08:35.771 13:40:18 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.771 13:40:18 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.771 13:40:18 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.771 13:40:18 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.771 13:40:18 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.771 13:40:18 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:35.771 13:40:18 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:35.771 13:40:18 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.771 13:40:18 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.771 13:40:18 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:35.771 13:40:18 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.771 13:40:18 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.771 13:40:18 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.771 13:40:18 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.771 13:40:18 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.771 13:40:18 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.771 13:40:18 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:35.771 13:40:18 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.771 13:40:18 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:08:35.771 13:40:18 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:35.771 13:40:18 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:35.771 13:40:18 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.771 13:40:18 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.771 13:40:18 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.771 13:40:18 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:35.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:35.771 13:40:18 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:35.771 13:40:18 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:35.771 13:40:18 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:35.771 13:40:18 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:35.771 13:40:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:35.771 13:40:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:35.772 13:40:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:35.772 13:40:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:35.772 13:40:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:35.772 13:40:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:35.772 13:40:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:08:35.772 13:40:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:35.772 13:40:18 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:35.772 13:40:18 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:35.772 INFO: launching applications... 00:08:35.772 13:40:18 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:35.772 13:40:18 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:35.772 13:40:18 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:35.772 13:40:18 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:35.772 13:40:18 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:35.772 13:40:18 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:35.772 13:40:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:35.772 13:40:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:35.772 13:40:18 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=476745 00:08:35.772 13:40:18 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:35.772 Waiting for target to run... 
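The `waitforlisten` step traced here launches spdk_tgt in the background and polls, bounded by `max_retries`, until the RPC socket is usable. A standalone sketch of that polling idea, using a plain file as a hypothetical stand-in for the `/var/tmp/spdk_tgt.sock` UNIX socket (the real helper also verifies an RPC call succeeds, not just that the socket exists):

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern: start an app in the background,
# then poll until its listen endpoint appears, with a retry bound.
sock=$(mktemp -u /tmp/demo_tgt.XXXXXX.sock)

( sleep 0.3; touch "$sock" ) &   # hypothetical app that starts listening late
app_pid=$!

max_retries=100
ready=no
for ((i = 0; i < max_retries; i++)); do
    if [ -e "$sock" ]; then     # real check would be [ -S ] plus an RPC ping
        ready=yes
        break
    fi
    sleep 0.1
done
echo "ready=$ready"
rm -f "$sock"
```

Bounding the loop is what lets the suite fail fast with a trap-driven error instead of hanging when the target never comes up.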
00:08:35.772 13:40:18 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 476745 /var/tmp/spdk_tgt.sock 00:08:35.772 13:40:18 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 476745 ']' 00:08:35.772 13:40:18 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:35.772 13:40:18 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:35.772 13:40:18 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.772 13:40:18 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:35.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:35.772 13:40:18 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.772 13:40:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:35.772 [2024-12-05 13:40:18.303607] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:08:35.772 [2024-12-05 13:40:18.303657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid476745 ] 00:08:36.031 [2024-12-05 13:40:18.586639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.289 [2024-12-05 13:40:18.621036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.547 13:40:19 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.547 13:40:19 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:08:36.547 13:40:19 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:36.547 00:08:36.547 13:40:19 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:08:36.547 INFO: shutting down applications... 00:08:36.547 13:40:19 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:36.547 13:40:19 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:36.547 13:40:19 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:36.547 13:40:19 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 476745 ]] 00:08:36.547 13:40:19 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 476745 00:08:36.547 13:40:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:36.547 13:40:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:36.547 13:40:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 476745 00:08:36.547 13:40:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:37.115 13:40:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:37.115 13:40:19 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:08:37.115 13:40:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 476745 00:08:37.115 13:40:19 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:37.115 13:40:19 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:37.115 13:40:19 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:37.115 13:40:19 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:37.115 SPDK target shutdown done 00:08:37.115 13:40:19 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:37.115 Success 00:08:37.115 00:08:37.115 real 0m1.564s 00:08:37.115 user 0m1.341s 00:08:37.115 sys 0m0.392s 00:08:37.115 13:40:19 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.115 13:40:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:37.115 ************************************ 00:08:37.115 END TEST json_config_extra_key 00:08:37.115 ************************************ 00:08:37.115 13:40:19 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:37.115 13:40:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.115 13:40:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.115 13:40:19 -- common/autotest_common.sh@10 -- # set +x 00:08:37.374 ************************************ 00:08:37.374 START TEST alias_rpc 00:08:37.374 ************************************ 00:08:37.374 13:40:19 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:37.374 * Looking for test storage... 
00:08:37.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:08:37.374 13:40:19 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:37.374 13:40:19 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:37.374 13:40:19 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:37.374 13:40:19 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:37.375 13:40:19 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.375 13:40:19 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.375 13:40:19 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.375 13:40:19 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.375 13:40:19 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.375 13:40:19 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.375 13:40:19 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.375 13:40:19 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.375 13:40:19 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.375 13:40:19 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.375 13:40:19 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.375 13:40:19 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:37.375 13:40:19 alias_rpc -- scripts/common.sh@345 -- # : 1 00:08:37.375 13:40:19 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.375 13:40:19 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.375 13:40:19 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:37.375 13:40:19 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:37.375 13:40:19 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.375 13:40:19 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:37.375 13:40:19 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.375 13:40:19 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:37.375 13:40:19 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:37.375 13:40:19 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.375 13:40:19 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:37.375 13:40:19 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.375 13:40:19 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.375 13:40:19 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.375 13:40:19 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:37.375 13:40:19 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.375 13:40:19 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:37.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.375 --rc genhtml_branch_coverage=1 00:08:37.375 --rc genhtml_function_coverage=1 00:08:37.375 --rc genhtml_legend=1 00:08:37.375 --rc geninfo_all_blocks=1 00:08:37.375 --rc geninfo_unexecuted_blocks=1 00:08:37.375 00:08:37.375 ' 00:08:37.375 13:40:19 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:37.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.375 --rc genhtml_branch_coverage=1 00:08:37.375 --rc genhtml_function_coverage=1 00:08:37.375 --rc genhtml_legend=1 00:08:37.375 --rc geninfo_all_blocks=1 00:08:37.375 --rc geninfo_unexecuted_blocks=1 00:08:37.375 00:08:37.375 ' 00:08:37.375 13:40:19 alias_rpc -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:08:37.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.375 --rc genhtml_branch_coverage=1 00:08:37.375 --rc genhtml_function_coverage=1 00:08:37.375 --rc genhtml_legend=1 00:08:37.375 --rc geninfo_all_blocks=1 00:08:37.375 --rc geninfo_unexecuted_blocks=1 00:08:37.375 00:08:37.375 ' 00:08:37.375 13:40:19 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:37.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.375 --rc genhtml_branch_coverage=1 00:08:37.375 --rc genhtml_function_coverage=1 00:08:37.375 --rc genhtml_legend=1 00:08:37.375 --rc geninfo_all_blocks=1 00:08:37.375 --rc geninfo_unexecuted_blocks=1 00:08:37.375 00:08:37.375 ' 00:08:37.375 13:40:19 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:37.375 13:40:19 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=477030 00:08:37.375 13:40:19 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 477030 00:08:37.375 13:40:19 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:37.375 13:40:19 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 477030 ']' 00:08:37.375 13:40:19 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.375 13:40:19 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.375 13:40:19 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.375 13:40:19 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.375 13:40:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.375 [2024-12-05 13:40:19.926080] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:08:37.375 [2024-12-05 13:40:19.926127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477030 ] 00:08:37.634 [2024-12-05 13:40:20.000003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.634 [2024-12-05 13:40:20.047808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.892 13:40:20 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:37.892 13:40:20 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:37.892 13:40:20 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:08:38.151 13:40:20 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 477030 00:08:38.151 13:40:20 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 477030 ']' 00:08:38.151 13:40:20 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 477030 00:08:38.151 13:40:20 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:08:38.151 13:40:20 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:38.151 13:40:20 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 477030 00:08:38.151 13:40:20 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:38.151 13:40:20 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:38.151 13:40:20 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 477030' 00:08:38.151 killing process with pid 477030 00:08:38.151 13:40:20 alias_rpc -- common/autotest_common.sh@973 -- # kill 477030 00:08:38.151 13:40:20 alias_rpc -- common/autotest_common.sh@978 -- # wait 477030 00:08:38.410 00:08:38.410 real 0m1.142s 00:08:38.410 user 0m1.189s 00:08:38.410 sys 0m0.403s 00:08:38.410 13:40:20 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.410 13:40:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.410 ************************************ 00:08:38.410 END TEST alias_rpc 00:08:38.410 ************************************ 00:08:38.410 13:40:20 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:08:38.410 13:40:20 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:38.410 13:40:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:38.410 13:40:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.410 13:40:20 -- common/autotest_common.sh@10 -- # set +x 00:08:38.410 ************************************ 00:08:38.410 START TEST spdkcli_tcp 00:08:38.410 ************************************ 00:08:38.410 13:40:20 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:38.410 * Looking for test storage... 
00:08:38.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:08:38.670 13:40:21 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:38.670 13:40:21 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:08:38.670 13:40:21 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:38.670 13:40:21 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:38.670 13:40:21 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.670 13:40:21 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.670 13:40:21 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.670 13:40:21 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.670 13:40:21 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.670 13:40:21 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.670 13:40:21 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.670 13:40:21 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.670 13:40:21 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.670 13:40:21 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.670 13:40:21 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.670 13:40:21 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:38.670 13:40:21 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:08:38.670 13:40:21 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.670 13:40:21 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:38.670 13:40:21 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:38.670 13:40:21 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:08:38.670 13:40:21 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.670 13:40:21 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:08:38.670 13:40:21 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.670 13:40:21 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:38.670 13:40:21 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:08:38.670 13:40:21 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.670 13:40:21 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:08:38.670 13:40:21 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.670 13:40:21 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.670 13:40:21 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.670 13:40:21 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:08:38.670 13:40:21 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.670 13:40:21 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:38.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.670 --rc genhtml_branch_coverage=1 00:08:38.670 --rc genhtml_function_coverage=1 00:08:38.670 --rc genhtml_legend=1 00:08:38.670 --rc geninfo_all_blocks=1 00:08:38.670 --rc geninfo_unexecuted_blocks=1 00:08:38.670 00:08:38.670 ' 00:08:38.670 13:40:21 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:38.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.670 --rc genhtml_branch_coverage=1 00:08:38.670 --rc genhtml_function_coverage=1 00:08:38.670 --rc genhtml_legend=1 00:08:38.670 --rc geninfo_all_blocks=1 00:08:38.670 --rc geninfo_unexecuted_blocks=1 00:08:38.670 00:08:38.670 ' 00:08:38.670 13:40:21 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:38.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.670 --rc genhtml_branch_coverage=1 00:08:38.670 --rc genhtml_function_coverage=1 00:08:38.670 --rc genhtml_legend=1 00:08:38.670 --rc geninfo_all_blocks=1 00:08:38.670 --rc geninfo_unexecuted_blocks=1 00:08:38.670 00:08:38.670 ' 00:08:38.670 13:40:21 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:38.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.670 --rc genhtml_branch_coverage=1 00:08:38.670 --rc genhtml_function_coverage=1 00:08:38.670 --rc genhtml_legend=1 00:08:38.670 --rc geninfo_all_blocks=1 00:08:38.670 --rc geninfo_unexecuted_blocks=1 00:08:38.670 00:08:38.670 ' 00:08:38.670 13:40:21 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:08:38.670 13:40:21 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:08:38.670 13:40:21 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:08:38.670 13:40:21 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:38.670 13:40:21 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:38.670 13:40:21 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:38.670 13:40:21 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:38.670 13:40:21 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:38.670 13:40:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:38.670 13:40:21 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=477317 00:08:38.670 13:40:21 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 477317 00:08:38.670 13:40:21 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:38.670 13:40:21 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 477317 ']' 00:08:38.670 13:40:21 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.670 13:40:21 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.670 13:40:21 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.670 13:40:21 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.670 13:40:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:38.670 [2024-12-05 13:40:21.146536] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:08:38.670 [2024-12-05 13:40:21.146584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477317 ] 00:08:38.670 [2024-12-05 13:40:21.217950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:38.929 [2024-12-05 13:40:21.262638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.930 [2024-12-05 13:40:21.262638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.497 13:40:21 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.497 13:40:21 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:08:39.497 13:40:21 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=477550 00:08:39.497 13:40:21 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:39.497 13:40:21 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:39.756 [ 00:08:39.756 "bdev_malloc_delete", 00:08:39.756 "bdev_malloc_create", 00:08:39.756 "bdev_null_resize", 00:08:39.756 "bdev_null_delete", 00:08:39.756 "bdev_null_create", 00:08:39.756 "bdev_nvme_cuse_unregister", 00:08:39.756 "bdev_nvme_cuse_register", 00:08:39.756 "bdev_opal_new_user", 00:08:39.756 "bdev_opal_set_lock_state", 00:08:39.756 "bdev_opal_delete", 00:08:39.756 "bdev_opal_get_info", 00:08:39.756 "bdev_opal_create", 00:08:39.756 "bdev_nvme_opal_revert", 00:08:39.756 "bdev_nvme_opal_init", 00:08:39.756 "bdev_nvme_send_cmd", 00:08:39.757 "bdev_nvme_set_keys", 00:08:39.757 "bdev_nvme_get_path_iostat", 00:08:39.757 "bdev_nvme_get_mdns_discovery_info", 00:08:39.757 "bdev_nvme_stop_mdns_discovery", 00:08:39.757 "bdev_nvme_start_mdns_discovery", 00:08:39.757 "bdev_nvme_set_multipath_policy", 00:08:39.757 "bdev_nvme_set_preferred_path", 00:08:39.757 "bdev_nvme_get_io_paths", 00:08:39.757 "bdev_nvme_remove_error_injection", 00:08:39.757 "bdev_nvme_add_error_injection", 00:08:39.757 "bdev_nvme_get_discovery_info", 00:08:39.757 "bdev_nvme_stop_discovery", 00:08:39.757 "bdev_nvme_start_discovery", 00:08:39.757 "bdev_nvme_get_controller_health_info", 00:08:39.757 "bdev_nvme_disable_controller", 00:08:39.757 "bdev_nvme_enable_controller", 00:08:39.757 "bdev_nvme_reset_controller", 00:08:39.757 "bdev_nvme_get_transport_statistics", 00:08:39.757 "bdev_nvme_apply_firmware", 00:08:39.757 "bdev_nvme_detach_controller", 00:08:39.757 "bdev_nvme_get_controllers", 00:08:39.757 "bdev_nvme_attach_controller", 00:08:39.757 "bdev_nvme_set_hotplug", 00:08:39.757 "bdev_nvme_set_options", 00:08:39.757 "bdev_passthru_delete", 00:08:39.757 "bdev_passthru_create", 00:08:39.757 "bdev_lvol_set_parent_bdev", 00:08:39.757 "bdev_lvol_set_parent", 00:08:39.757 "bdev_lvol_check_shallow_copy", 00:08:39.757 "bdev_lvol_start_shallow_copy", 00:08:39.757 "bdev_lvol_grow_lvstore", 00:08:39.757 
"bdev_lvol_get_lvols", 00:08:39.757 "bdev_lvol_get_lvstores", 00:08:39.757 "bdev_lvol_delete", 00:08:39.757 "bdev_lvol_set_read_only", 00:08:39.757 "bdev_lvol_resize", 00:08:39.757 "bdev_lvol_decouple_parent", 00:08:39.757 "bdev_lvol_inflate", 00:08:39.757 "bdev_lvol_rename", 00:08:39.757 "bdev_lvol_clone_bdev", 00:08:39.757 "bdev_lvol_clone", 00:08:39.757 "bdev_lvol_snapshot", 00:08:39.757 "bdev_lvol_create", 00:08:39.757 "bdev_lvol_delete_lvstore", 00:08:39.757 "bdev_lvol_rename_lvstore", 00:08:39.757 "bdev_lvol_create_lvstore", 00:08:39.757 "bdev_raid_set_options", 00:08:39.757 "bdev_raid_remove_base_bdev", 00:08:39.757 "bdev_raid_add_base_bdev", 00:08:39.757 "bdev_raid_delete", 00:08:39.757 "bdev_raid_create", 00:08:39.757 "bdev_raid_get_bdevs", 00:08:39.757 "bdev_error_inject_error", 00:08:39.757 "bdev_error_delete", 00:08:39.757 "bdev_error_create", 00:08:39.757 "bdev_split_delete", 00:08:39.757 "bdev_split_create", 00:08:39.757 "bdev_delay_delete", 00:08:39.757 "bdev_delay_create", 00:08:39.757 "bdev_delay_update_latency", 00:08:39.757 "bdev_zone_block_delete", 00:08:39.757 "bdev_zone_block_create", 00:08:39.757 "blobfs_create", 00:08:39.757 "blobfs_detect", 00:08:39.757 "blobfs_set_cache_size", 00:08:39.757 "bdev_aio_delete", 00:08:39.757 "bdev_aio_rescan", 00:08:39.757 "bdev_aio_create", 00:08:39.757 "bdev_ftl_set_property", 00:08:39.757 "bdev_ftl_get_properties", 00:08:39.757 "bdev_ftl_get_stats", 00:08:39.757 "bdev_ftl_unmap", 00:08:39.757 "bdev_ftl_unload", 00:08:39.757 "bdev_ftl_delete", 00:08:39.757 "bdev_ftl_load", 00:08:39.757 "bdev_ftl_create", 00:08:39.757 "bdev_virtio_attach_controller", 00:08:39.757 "bdev_virtio_scsi_get_devices", 00:08:39.757 "bdev_virtio_detach_controller", 00:08:39.757 "bdev_virtio_blk_set_hotplug", 00:08:39.757 "bdev_iscsi_delete", 00:08:39.757 "bdev_iscsi_create", 00:08:39.757 "bdev_iscsi_set_options", 00:08:39.757 "accel_error_inject_error", 00:08:39.757 "ioat_scan_accel_module", 00:08:39.757 "dsa_scan_accel_module", 
00:08:39.757 "iaa_scan_accel_module", 00:08:39.757 "vfu_virtio_create_fs_endpoint", 00:08:39.757 "vfu_virtio_create_scsi_endpoint", 00:08:39.757 "vfu_virtio_scsi_remove_target", 00:08:39.757 "vfu_virtio_scsi_add_target", 00:08:39.757 "vfu_virtio_create_blk_endpoint", 00:08:39.757 "vfu_virtio_delete_endpoint", 00:08:39.757 "keyring_file_remove_key", 00:08:39.757 "keyring_file_add_key", 00:08:39.757 "keyring_linux_set_options", 00:08:39.757 "fsdev_aio_delete", 00:08:39.757 "fsdev_aio_create", 00:08:39.757 "iscsi_get_histogram", 00:08:39.757 "iscsi_enable_histogram", 00:08:39.757 "iscsi_set_options", 00:08:39.757 "iscsi_get_auth_groups", 00:08:39.757 "iscsi_auth_group_remove_secret", 00:08:39.757 "iscsi_auth_group_add_secret", 00:08:39.757 "iscsi_delete_auth_group", 00:08:39.757 "iscsi_create_auth_group", 00:08:39.757 "iscsi_set_discovery_auth", 00:08:39.757 "iscsi_get_options", 00:08:39.757 "iscsi_target_node_request_logout", 00:08:39.757 "iscsi_target_node_set_redirect", 00:08:39.757 "iscsi_target_node_set_auth", 00:08:39.757 "iscsi_target_node_add_lun", 00:08:39.757 "iscsi_get_stats", 00:08:39.757 "iscsi_get_connections", 00:08:39.757 "iscsi_portal_group_set_auth", 00:08:39.757 "iscsi_start_portal_group", 00:08:39.757 "iscsi_delete_portal_group", 00:08:39.757 "iscsi_create_portal_group", 00:08:39.757 "iscsi_get_portal_groups", 00:08:39.757 "iscsi_delete_target_node", 00:08:39.757 "iscsi_target_node_remove_pg_ig_maps", 00:08:39.757 "iscsi_target_node_add_pg_ig_maps", 00:08:39.757 "iscsi_create_target_node", 00:08:39.757 "iscsi_get_target_nodes", 00:08:39.757 "iscsi_delete_initiator_group", 00:08:39.757 "iscsi_initiator_group_remove_initiators", 00:08:39.757 "iscsi_initiator_group_add_initiators", 00:08:39.757 "iscsi_create_initiator_group", 00:08:39.757 "iscsi_get_initiator_groups", 00:08:39.757 "nvmf_set_crdt", 00:08:39.757 "nvmf_set_config", 00:08:39.757 "nvmf_set_max_subsystems", 00:08:39.757 "nvmf_stop_mdns_prr", 00:08:39.757 "nvmf_publish_mdns_prr", 
00:08:39.757 "nvmf_subsystem_get_listeners", 00:08:39.757 "nvmf_subsystem_get_qpairs", 00:08:39.757 "nvmf_subsystem_get_controllers", 00:08:39.757 "nvmf_get_stats", 00:08:39.757 "nvmf_get_transports", 00:08:39.757 "nvmf_create_transport", 00:08:39.757 "nvmf_get_targets", 00:08:39.757 "nvmf_delete_target", 00:08:39.757 "nvmf_create_target", 00:08:39.757 "nvmf_subsystem_allow_any_host", 00:08:39.757 "nvmf_subsystem_set_keys", 00:08:39.757 "nvmf_subsystem_remove_host", 00:08:39.757 "nvmf_subsystem_add_host", 00:08:39.757 "nvmf_ns_remove_host", 00:08:39.757 "nvmf_ns_add_host", 00:08:39.757 "nvmf_subsystem_remove_ns", 00:08:39.757 "nvmf_subsystem_set_ns_ana_group", 00:08:39.757 "nvmf_subsystem_add_ns", 00:08:39.757 "nvmf_subsystem_listener_set_ana_state", 00:08:39.757 "nvmf_discovery_get_referrals", 00:08:39.757 "nvmf_discovery_remove_referral", 00:08:39.757 "nvmf_discovery_add_referral", 00:08:39.757 "nvmf_subsystem_remove_listener", 00:08:39.757 "nvmf_subsystem_add_listener", 00:08:39.757 "nvmf_delete_subsystem", 00:08:39.757 "nvmf_create_subsystem", 00:08:39.757 "nvmf_get_subsystems", 00:08:39.757 "env_dpdk_get_mem_stats", 00:08:39.757 "nbd_get_disks", 00:08:39.757 "nbd_stop_disk", 00:08:39.757 "nbd_start_disk", 00:08:39.757 "ublk_recover_disk", 00:08:39.757 "ublk_get_disks", 00:08:39.757 "ublk_stop_disk", 00:08:39.757 "ublk_start_disk", 00:08:39.757 "ublk_destroy_target", 00:08:39.757 "ublk_create_target", 00:08:39.757 "virtio_blk_create_transport", 00:08:39.757 "virtio_blk_get_transports", 00:08:39.757 "vhost_controller_set_coalescing", 00:08:39.757 "vhost_get_controllers", 00:08:39.757 "vhost_delete_controller", 00:08:39.757 "vhost_create_blk_controller", 00:08:39.757 "vhost_scsi_controller_remove_target", 00:08:39.757 "vhost_scsi_controller_add_target", 00:08:39.757 "vhost_start_scsi_controller", 00:08:39.757 "vhost_create_scsi_controller", 00:08:39.757 "thread_set_cpumask", 00:08:39.757 "scheduler_set_options", 00:08:39.757 "framework_get_governor", 00:08:39.757 
"framework_get_scheduler", 00:08:39.757 "framework_set_scheduler", 00:08:39.757 "framework_get_reactors", 00:08:39.757 "thread_get_io_channels", 00:08:39.757 "thread_get_pollers", 00:08:39.757 "thread_get_stats", 00:08:39.757 "framework_monitor_context_switch", 00:08:39.757 "spdk_kill_instance", 00:08:39.757 "log_enable_timestamps", 00:08:39.757 "log_get_flags", 00:08:39.757 "log_clear_flag", 00:08:39.757 "log_set_flag", 00:08:39.757 "log_get_level", 00:08:39.757 "log_set_level", 00:08:39.757 "log_get_print_level", 00:08:39.757 "log_set_print_level", 00:08:39.757 "framework_enable_cpumask_locks", 00:08:39.757 "framework_disable_cpumask_locks", 00:08:39.757 "framework_wait_init", 00:08:39.757 "framework_start_init", 00:08:39.757 "scsi_get_devices", 00:08:39.757 "bdev_get_histogram", 00:08:39.757 "bdev_enable_histogram", 00:08:39.757 "bdev_set_qos_limit", 00:08:39.757 "bdev_set_qd_sampling_period", 00:08:39.757 "bdev_get_bdevs", 00:08:39.757 "bdev_reset_iostat", 00:08:39.757 "bdev_get_iostat", 00:08:39.757 "bdev_examine", 00:08:39.757 "bdev_wait_for_examine", 00:08:39.757 "bdev_set_options", 00:08:39.757 "accel_get_stats", 00:08:39.757 "accel_set_options", 00:08:39.757 "accel_set_driver", 00:08:39.757 "accel_crypto_key_destroy", 00:08:39.757 "accel_crypto_keys_get", 00:08:39.757 "accel_crypto_key_create", 00:08:39.757 "accel_assign_opc", 00:08:39.757 "accel_get_module_info", 00:08:39.757 "accel_get_opc_assignments", 00:08:39.757 "vmd_rescan", 00:08:39.757 "vmd_remove_device", 00:08:39.757 "vmd_enable", 00:08:39.757 "sock_get_default_impl", 00:08:39.757 "sock_set_default_impl", 00:08:39.757 "sock_impl_set_options", 00:08:39.757 "sock_impl_get_options", 00:08:39.757 "iobuf_get_stats", 00:08:39.757 "iobuf_set_options", 00:08:39.757 "keyring_get_keys", 00:08:39.757 "vfu_tgt_set_base_path", 00:08:39.757 "framework_get_pci_devices", 00:08:39.758 "framework_get_config", 00:08:39.758 "framework_get_subsystems", 00:08:39.758 "fsdev_set_opts", 00:08:39.758 "fsdev_get_opts", 
00:08:39.758 "trace_get_info", 00:08:39.758 "trace_get_tpoint_group_mask", 00:08:39.758 "trace_disable_tpoint_group", 00:08:39.758 "trace_enable_tpoint_group", 00:08:39.758 "trace_clear_tpoint_mask", 00:08:39.758 "trace_set_tpoint_mask", 00:08:39.758 "notify_get_notifications", 00:08:39.758 "notify_get_types", 00:08:39.758 "spdk_get_version", 00:08:39.758 "rpc_get_methods" 00:08:39.758 ] 00:08:39.758 13:40:22 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:39.758 13:40:22 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:39.758 13:40:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:39.758 13:40:22 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:39.758 13:40:22 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 477317 00:08:39.758 13:40:22 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 477317 ']' 00:08:39.758 13:40:22 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 477317 00:08:39.758 13:40:22 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:08:39.758 13:40:22 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.758 13:40:22 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 477317 00:08:39.758 13:40:22 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:39.758 13:40:22 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:39.758 13:40:22 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 477317' 00:08:39.758 killing process with pid 477317 00:08:39.758 13:40:22 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 477317 00:08:39.758 13:40:22 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 477317 00:08:40.017 00:08:40.017 real 0m1.642s 00:08:40.017 user 0m3.046s 00:08:40.017 sys 0m0.474s 00:08:40.017 13:40:22 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.017 13:40:22 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:08:40.017 ************************************ 00:08:40.017 END TEST spdkcli_tcp 00:08:40.017 ************************************ 00:08:40.017 13:40:22 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:40.017 13:40:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:40.017 13:40:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.017 13:40:22 -- common/autotest_common.sh@10 -- # set +x 00:08:40.276 ************************************ 00:08:40.277 START TEST dpdk_mem_utility 00:08:40.277 ************************************ 00:08:40.277 13:40:22 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:40.277 * Looking for test storage... 00:08:40.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:08:40.277 13:40:22 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:40.277 13:40:22 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:08:40.277 13:40:22 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:40.277 13:40:22 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:40.277 13:40:22 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:40.277 13:40:22 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:40.277 13:40:22 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:40.277 13:40:22 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.277 13:40:22 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:40.277 13:40:22 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:40.277 13:40:22 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:08:40.277 13:40:22 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:40.277 13:40:22 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:40.277 13:40:22 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:40.277 13:40:22 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:40.277 13:40:22 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:40.277 13:40:22 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:40.277 13:40:22 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:40.277 13:40:22 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:40.277 13:40:22 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:40.277 13:40:22 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:40.277 13:40:22 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.277 13:40:22 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:40.277 13:40:22 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:40.277 13:40:22 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:40.277 13:40:22 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:40.277 13:40:22 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.277 13:40:22 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:40.277 13:40:22 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:40.277 13:40:22 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:40.277 13:40:22 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:40.277 13:40:22 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:40.277 13:40:22 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.277 13:40:22 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 
00:08:40.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.277 --rc genhtml_branch_coverage=1 00:08:40.277 --rc genhtml_function_coverage=1 00:08:40.277 --rc genhtml_legend=1 00:08:40.277 --rc geninfo_all_blocks=1 00:08:40.277 --rc geninfo_unexecuted_blocks=1 00:08:40.277 00:08:40.277 ' 00:08:40.277 13:40:22 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:40.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.277 --rc genhtml_branch_coverage=1 00:08:40.277 --rc genhtml_function_coverage=1 00:08:40.277 --rc genhtml_legend=1 00:08:40.277 --rc geninfo_all_blocks=1 00:08:40.277 --rc geninfo_unexecuted_blocks=1 00:08:40.277 00:08:40.277 ' 00:08:40.277 13:40:22 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:40.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.277 --rc genhtml_branch_coverage=1 00:08:40.277 --rc genhtml_function_coverage=1 00:08:40.277 --rc genhtml_legend=1 00:08:40.277 --rc geninfo_all_blocks=1 00:08:40.277 --rc geninfo_unexecuted_blocks=1 00:08:40.277 00:08:40.277 ' 00:08:40.277 13:40:22 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:40.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.277 --rc genhtml_branch_coverage=1 00:08:40.277 --rc genhtml_function_coverage=1 00:08:40.277 --rc genhtml_legend=1 00:08:40.277 --rc geninfo_all_blocks=1 00:08:40.277 --rc geninfo_unexecuted_blocks=1 00:08:40.277 00:08:40.277 ' 00:08:40.277 13:40:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:40.277 13:40:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=477638 00:08:40.277 13:40:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 477638 00:08:40.277 13:40:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:40.277 13:40:22 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 477638 ']' 00:08:40.277 13:40:22 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.277 13:40:22 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:40.277 13:40:22 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.277 13:40:22 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:40.277 13:40:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:40.277 [2024-12-05 13:40:22.843055] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:08:40.277 [2024-12-05 13:40:22.843103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477638 ] 00:08:40.536 [2024-12-05 13:40:22.915170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.536 [2024-12-05 13:40:22.955666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.799 13:40:23 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.799 13:40:23 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:08:40.799 13:40:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:40.799 13:40:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:40.799 13:40:23 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.799 
13:40:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:40.799 { 00:08:40.799 "filename": "/tmp/spdk_mem_dump.txt" 00:08:40.799 } 00:08:40.799 13:40:23 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.799 13:40:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:40.799 DPDK memory size 818.000000 MiB in 1 heap(s) 00:08:40.799 1 heaps totaling size 818.000000 MiB 00:08:40.799 size: 818.000000 MiB heap id: 0 00:08:40.799 end heaps---------- 00:08:40.799 9 mempools totaling size 603.782043 MiB 00:08:40.799 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:40.799 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:40.799 size: 100.555481 MiB name: bdev_io_477638 00:08:40.799 size: 50.003479 MiB name: msgpool_477638 00:08:40.799 size: 36.509338 MiB name: fsdev_io_477638 00:08:40.799 size: 21.763794 MiB name: PDU_Pool 00:08:40.799 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:40.799 size: 4.133484 MiB name: evtpool_477638 00:08:40.799 size: 0.026123 MiB name: Session_Pool 00:08:40.799 end mempools------- 00:08:40.799 6 memzones totaling size 4.142822 MiB 00:08:40.799 size: 1.000366 MiB name: RG_ring_0_477638 00:08:40.799 size: 1.000366 MiB name: RG_ring_1_477638 00:08:40.799 size: 1.000366 MiB name: RG_ring_4_477638 00:08:40.800 size: 1.000366 MiB name: RG_ring_5_477638 00:08:40.800 size: 0.125366 MiB name: RG_ring_2_477638 00:08:40.800 size: 0.015991 MiB name: RG_ring_3_477638 00:08:40.800 end memzones------- 00:08:40.800 13:40:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:08:40.800 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:08:40.800 list of free elements. 
size: 10.852478 MiB 00:08:40.800 element at address: 0x200019200000 with size: 0.999878 MiB 00:08:40.800 element at address: 0x200019400000 with size: 0.999878 MiB 00:08:40.800 element at address: 0x200000400000 with size: 0.998535 MiB 00:08:40.800 element at address: 0x200032000000 with size: 0.994446 MiB 00:08:40.800 element at address: 0x200006400000 with size: 0.959839 MiB 00:08:40.800 element at address: 0x200012c00000 with size: 0.944275 MiB 00:08:40.800 element at address: 0x200019600000 with size: 0.936584 MiB 00:08:40.800 element at address: 0x200000200000 with size: 0.717346 MiB 00:08:40.800 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:08:40.800 element at address: 0x200000c00000 with size: 0.495422 MiB 00:08:40.800 element at address: 0x20000a600000 with size: 0.490723 MiB 00:08:40.800 element at address: 0x200019800000 with size: 0.485657 MiB 00:08:40.800 element at address: 0x200003e00000 with size: 0.481934 MiB 00:08:40.800 element at address: 0x200028200000 with size: 0.410034 MiB 00:08:40.800 element at address: 0x200000800000 with size: 0.355042 MiB 00:08:40.800 list of standard malloc elements. 
size: 199.218628 MiB 00:08:40.800 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:08:40.800 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:08:40.800 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:08:40.800 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:08:40.800 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:08:40.800 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:08:40.800 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:08:40.800 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:08:40.800 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:08:40.800 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:08:40.800 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:08:40.800 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:08:40.800 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:08:40.800 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:08:40.800 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:08:40.800 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:08:40.800 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:08:40.800 element at address: 0x20000085b040 with size: 0.000183 MiB 00:08:40.800 element at address: 0x20000085f300 with size: 0.000183 MiB 00:08:40.800 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:08:40.800 element at address: 0x20000087f680 with size: 0.000183 MiB 00:08:40.800 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:08:40.800 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:08:40.800 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:08:40.800 element at address: 0x200000cff000 with size: 0.000183 MiB 00:08:40.800 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:08:40.800 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:08:40.800 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:08:40.800 element at address: 0x200003efb980 with size: 0.000183 MiB 00:08:40.800 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:08:40.800 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:08:40.800 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:08:40.800 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:08:40.800 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:08:40.800 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:08:40.800 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:08:40.800 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:08:40.800 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:08:40.800 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:08:40.800 element at address: 0x200028268f80 with size: 0.000183 MiB 00:08:40.800 element at address: 0x200028269040 with size: 0.000183 MiB 00:08:40.800 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:08:40.800 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:08:40.800 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:08:40.800 list of memzone associated elements. 
size: 607.928894 MiB 00:08:40.800 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:08:40.800 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:40.800 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:08:40.800 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:40.800 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:08:40.800 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_477638_0 00:08:40.800 element at address: 0x200000dff380 with size: 48.003052 MiB 00:08:40.800 associated memzone info: size: 48.002930 MiB name: MP_msgpool_477638_0 00:08:40.800 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:08:40.800 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_477638_0 00:08:40.800 element at address: 0x2000199be940 with size: 20.255554 MiB 00:08:40.800 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:40.800 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:08:40.800 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:40.800 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:08:40.800 associated memzone info: size: 3.000122 MiB name: MP_evtpool_477638_0 00:08:40.800 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:08:40.800 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_477638 00:08:40.800 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:08:40.800 associated memzone info: size: 1.007996 MiB name: MP_evtpool_477638 00:08:40.800 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:08:40.800 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:40.800 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:08:40.800 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:40.800 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:08:40.800 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:40.800 element at address: 0x200003efba40 with size: 1.008118 MiB 00:08:40.800 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:40.800 element at address: 0x200000cff180 with size: 1.000488 MiB 00:08:40.800 associated memzone info: size: 1.000366 MiB name: RG_ring_0_477638 00:08:40.800 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:08:40.800 associated memzone info: size: 1.000366 MiB name: RG_ring_1_477638 00:08:40.800 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:08:40.800 associated memzone info: size: 1.000366 MiB name: RG_ring_4_477638 00:08:40.800 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:08:40.800 associated memzone info: size: 1.000366 MiB name: RG_ring_5_477638 00:08:40.800 element at address: 0x20000087f740 with size: 0.500488 MiB 00:08:40.800 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_477638 00:08:40.800 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:08:40.800 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_477638 00:08:40.800 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:08:40.800 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:40.800 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:08:40.800 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:40.800 element at address: 0x20001987c540 with size: 0.250488 MiB 00:08:40.800 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:40.800 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:08:40.800 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_477638 00:08:40.800 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:08:40.800 associated memzone info: size: 0.125366 MiB name: RG_ring_2_477638 00:08:40.800 element at address: 0x2000064f5b80 with size: 0.031738 MiB 
00:08:40.800 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:40.800 element at address: 0x200028269100 with size: 0.023743 MiB 00:08:40.800 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:40.800 element at address: 0x20000085b100 with size: 0.016113 MiB 00:08:40.800 associated memzone info: size: 0.015991 MiB name: RG_ring_3_477638 00:08:40.800 element at address: 0x20002826f240 with size: 0.002441 MiB 00:08:40.800 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:40.800 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:08:40.800 associated memzone info: size: 0.000183 MiB name: MP_msgpool_477638 00:08:40.800 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:08:40.800 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_477638 00:08:40.800 element at address: 0x20000085af00 with size: 0.000305 MiB 00:08:40.800 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_477638 00:08:40.800 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:08:40.800 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:40.800 13:40:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:40.800 13:40:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 477638 00:08:40.801 13:40:23 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 477638 ']' 00:08:40.801 13:40:23 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 477638 00:08:40.801 13:40:23 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:08:40.801 13:40:23 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:40.801 13:40:23 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 477638 00:08:40.801 13:40:23 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:40.801 13:40:23 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:40.801 13:40:23 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 477638' 00:08:40.801 killing process with pid 477638 00:08:40.801 13:40:23 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 477638 00:08:40.801 13:40:23 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 477638 00:08:41.367 00:08:41.367 real 0m1.029s 00:08:41.367 user 0m0.961s 00:08:41.367 sys 0m0.415s 00:08:41.367 13:40:23 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.367 13:40:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:41.367 ************************************ 00:08:41.367 END TEST dpdk_mem_utility 00:08:41.367 ************************************ 00:08:41.367 13:40:23 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:08:41.367 13:40:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:41.367 13:40:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.367 13:40:23 -- common/autotest_common.sh@10 -- # set +x 00:08:41.367 ************************************ 00:08:41.367 START TEST event 00:08:41.367 ************************************ 00:08:41.367 13:40:23 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:08:41.367 * Looking for test storage... 
00:08:41.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:41.367 13:40:23 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:41.367 13:40:23 event -- common/autotest_common.sh@1711 -- # lcov --version 00:08:41.367 13:40:23 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:41.367 13:40:23 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:41.367 13:40:23 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:41.367 13:40:23 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:41.367 13:40:23 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:41.367 13:40:23 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.367 13:40:23 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:41.367 13:40:23 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:41.367 13:40:23 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:41.367 13:40:23 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:41.367 13:40:23 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:41.367 13:40:23 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:41.367 13:40:23 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:41.367 13:40:23 event -- scripts/common.sh@344 -- # case "$op" in 00:08:41.367 13:40:23 event -- scripts/common.sh@345 -- # : 1 00:08:41.367 13:40:23 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:41.367 13:40:23 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:41.367 13:40:23 event -- scripts/common.sh@365 -- # decimal 1 00:08:41.367 13:40:23 event -- scripts/common.sh@353 -- # local d=1 00:08:41.367 13:40:23 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.367 13:40:23 event -- scripts/common.sh@355 -- # echo 1 00:08:41.367 13:40:23 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:41.367 13:40:23 event -- scripts/common.sh@366 -- # decimal 2 00:08:41.367 13:40:23 event -- scripts/common.sh@353 -- # local d=2 00:08:41.367 13:40:23 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.367 13:40:23 event -- scripts/common.sh@355 -- # echo 2 00:08:41.367 13:40:23 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:41.367 13:40:23 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:41.367 13:40:23 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:41.367 13:40:23 event -- scripts/common.sh@368 -- # return 0 00:08:41.367 13:40:23 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.367 13:40:23 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:41.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.367 --rc genhtml_branch_coverage=1 00:08:41.367 --rc genhtml_function_coverage=1 00:08:41.367 --rc genhtml_legend=1 00:08:41.368 --rc geninfo_all_blocks=1 00:08:41.368 --rc geninfo_unexecuted_blocks=1 00:08:41.368 00:08:41.368 ' 00:08:41.368 13:40:23 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:41.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.368 --rc genhtml_branch_coverage=1 00:08:41.368 --rc genhtml_function_coverage=1 00:08:41.368 --rc genhtml_legend=1 00:08:41.368 --rc geninfo_all_blocks=1 00:08:41.368 --rc geninfo_unexecuted_blocks=1 00:08:41.368 00:08:41.368 ' 00:08:41.368 13:40:23 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:41.368 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:41.368 --rc genhtml_branch_coverage=1 00:08:41.368 --rc genhtml_function_coverage=1 00:08:41.368 --rc genhtml_legend=1 00:08:41.368 --rc geninfo_all_blocks=1 00:08:41.368 --rc geninfo_unexecuted_blocks=1 00:08:41.368 00:08:41.368 ' 00:08:41.368 13:40:23 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:41.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.368 --rc genhtml_branch_coverage=1 00:08:41.368 --rc genhtml_function_coverage=1 00:08:41.368 --rc genhtml_legend=1 00:08:41.368 --rc geninfo_all_blocks=1 00:08:41.368 --rc geninfo_unexecuted_blocks=1 00:08:41.368 00:08:41.368 ' 00:08:41.368 13:40:23 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:08:41.368 13:40:23 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:41.368 13:40:23 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:41.368 13:40:23 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:41.368 13:40:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.368 13:40:23 event -- common/autotest_common.sh@10 -- # set +x 00:08:41.368 ************************************ 00:08:41.368 START TEST event_perf 00:08:41.368 ************************************ 00:08:41.368 13:40:23 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:41.368 Running I/O for 1 seconds...[2024-12-05 13:40:23.935756] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:08:41.368 [2024-12-05 13:40:23.935826] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477928 ] 00:08:41.626 [2024-12-05 13:40:24.015942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:41.626 [2024-12-05 13:40:24.059668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.626 [2024-12-05 13:40:24.059780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.626 [2024-12-05 13:40:24.059886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.626 [2024-12-05 13:40:24.059887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:42.560 Running I/O for 1 seconds... 00:08:42.560 lcore 0: 207490 00:08:42.560 lcore 1: 207488 00:08:42.560 lcore 2: 207490 00:08:42.560 lcore 3: 207490 00:08:42.560 done. 
00:08:42.560 00:08:42.560 real 0m1.185s 00:08:42.560 user 0m4.097s 00:08:42.560 sys 0m0.084s 00:08:42.560 13:40:25 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.560 13:40:25 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:42.560 ************************************ 00:08:42.560 END TEST event_perf 00:08:42.560 ************************************ 00:08:42.560 13:40:25 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:42.560 13:40:25 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:42.560 13:40:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.560 13:40:25 event -- common/autotest_common.sh@10 -- # set +x 00:08:42.825 ************************************ 00:08:42.825 START TEST event_reactor 00:08:42.825 ************************************ 00:08:42.825 13:40:25 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:42.825 [2024-12-05 13:40:25.190683] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:08:42.825 [2024-12-05 13:40:25.190757] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478180 ] 00:08:42.825 [2024-12-05 13:40:25.269847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.825 [2024-12-05 13:40:25.309185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.763 test_start 00:08:43.763 oneshot 00:08:43.763 tick 100 00:08:43.763 tick 100 00:08:43.763 tick 250 00:08:43.763 tick 100 00:08:43.763 tick 100 00:08:43.763 tick 100 00:08:43.763 tick 250 00:08:43.763 tick 500 00:08:43.763 tick 100 00:08:43.763 tick 100 00:08:43.763 tick 250 00:08:43.763 tick 100 00:08:43.763 tick 100 00:08:43.763 test_end 00:08:43.763 00:08:43.763 real 0m1.177s 00:08:43.763 user 0m1.099s 00:08:43.763 sys 0m0.073s 00:08:43.763 13:40:26 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.763 13:40:26 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:43.763 ************************************ 00:08:43.763 END TEST event_reactor 00:08:43.763 ************************************ 00:08:44.022 13:40:26 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:44.022 13:40:26 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:44.022 13:40:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.022 13:40:26 event -- common/autotest_common.sh@10 -- # set +x 00:08:44.022 ************************************ 00:08:44.022 START TEST event_reactor_perf 00:08:44.022 ************************************ 00:08:44.022 13:40:26 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:08:44.022 [2024-12-05 13:40:26.435440] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:08:44.023 [2024-12-05 13:40:26.435512] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478431 ] 00:08:44.023 [2024-12-05 13:40:26.513085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.023 [2024-12-05 13:40:26.552072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.399 test_start 00:08:45.399 test_end 00:08:45.399 Performance: 502272 events per second 00:08:45.399 00:08:45.399 real 0m1.179s 00:08:45.399 user 0m1.097s 00:08:45.399 sys 0m0.079s 00:08:45.399 13:40:27 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.399 13:40:27 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:45.399 ************************************ 00:08:45.399 END TEST event_reactor_perf 00:08:45.399 ************************************ 00:08:45.399 13:40:27 event -- event/event.sh@49 -- # uname -s 00:08:45.399 13:40:27 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:45.399 13:40:27 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:08:45.399 13:40:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:45.399 13:40:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.399 13:40:27 event -- common/autotest_common.sh@10 -- # set +x 00:08:45.399 ************************************ 00:08:45.399 START TEST event_scheduler 00:08:45.399 ************************************ 00:08:45.399 13:40:27 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:08:45.399 * Looking for test storage... 00:08:45.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:08:45.399 13:40:27 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:45.399 13:40:27 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:08:45.399 13:40:27 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:45.399 13:40:27 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:45.399 13:40:27 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.399 13:40:27 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.399 13:40:27 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.399 13:40:27 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.399 13:40:27 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.399 13:40:27 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.399 13:40:27 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.399 13:40:27 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.399 13:40:27 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.399 13:40:27 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.399 13:40:27 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.399 13:40:27 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:45.399 13:40:27 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:45.399 13:40:27 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.399 13:40:27 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:45.399 13:40:27 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:45.399 13:40:27 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:45.399 13:40:27 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.399 13:40:27 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:45.399 13:40:27 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.399 13:40:27 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:45.399 13:40:27 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:45.399 13:40:27 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.399 13:40:27 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:45.399 13:40:27 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.399 13:40:27 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.399 13:40:27 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.399 13:40:27 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:45.399 13:40:27 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.399 13:40:27 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:45.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.399 --rc genhtml_branch_coverage=1 00:08:45.399 --rc genhtml_function_coverage=1 00:08:45.399 --rc genhtml_legend=1 00:08:45.399 --rc geninfo_all_blocks=1 00:08:45.399 --rc geninfo_unexecuted_blocks=1 00:08:45.399 00:08:45.399 ' 00:08:45.399 13:40:27 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:45.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.399 --rc genhtml_branch_coverage=1 00:08:45.399 --rc genhtml_function_coverage=1 00:08:45.399 --rc 
genhtml_legend=1 00:08:45.399 --rc geninfo_all_blocks=1 00:08:45.399 --rc geninfo_unexecuted_blocks=1 00:08:45.399 00:08:45.399 ' 00:08:45.399 13:40:27 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:45.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.399 --rc genhtml_branch_coverage=1 00:08:45.399 --rc genhtml_function_coverage=1 00:08:45.399 --rc genhtml_legend=1 00:08:45.399 --rc geninfo_all_blocks=1 00:08:45.399 --rc geninfo_unexecuted_blocks=1 00:08:45.399 00:08:45.399 ' 00:08:45.399 13:40:27 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:45.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.399 --rc genhtml_branch_coverage=1 00:08:45.399 --rc genhtml_function_coverage=1 00:08:45.399 --rc genhtml_legend=1 00:08:45.399 --rc geninfo_all_blocks=1 00:08:45.399 --rc geninfo_unexecuted_blocks=1 00:08:45.399 00:08:45.399 ' 00:08:45.399 13:40:27 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:45.399 13:40:27 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=478715 00:08:45.399 13:40:27 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:45.399 13:40:27 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:45.399 13:40:27 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 478715 00:08:45.399 13:40:27 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 478715 ']' 00:08:45.399 13:40:27 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.399 13:40:27 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.399 13:40:27 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.400 13:40:27 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.400 13:40:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:45.400 [2024-12-05 13:40:27.889876] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:08:45.400 [2024-12-05 13:40:27.889923] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478715 ] 00:08:45.400 [2024-12-05 13:40:27.962208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:45.659 [2024-12-05 13:40:28.006854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.659 [2024-12-05 13:40:28.006961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.659 [2024-12-05 13:40:28.006989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.659 [2024-12-05 13:40:28.006991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:45.659 13:40:28 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.659 13:40:28 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:08:45.659 13:40:28 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:45.659 13:40:28 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.659 13:40:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:45.659 [2024-12-05 13:40:28.047636] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:08:45.659 [2024-12-05 13:40:28.047653] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:45.659 [2024-12-05 13:40:28.047663] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:45.659 [2024-12-05 13:40:28.047668] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:45.659 [2024-12-05 13:40:28.047673] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:45.659 13:40:28 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.659 13:40:28 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:45.659 13:40:28 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.659 13:40:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:45.659 [2024-12-05 13:40:28.122308] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:08:45.659 13:40:28 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.659 13:40:28 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:45.659 13:40:28 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:45.659 13:40:28 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.659 13:40:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:45.659 ************************************ 00:08:45.659 START TEST scheduler_create_thread 00:08:45.659 ************************************ 00:08:45.659 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:08:45.659 13:40:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:45.659 13:40:28 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.659 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:45.659 2 00:08:45.659 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.659 13:40:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:45.659 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.659 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:45.659 3 00:08:45.659 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.659 13:40:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:45.659 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.659 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:45.659 4 00:08:45.659 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.659 13:40:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:45.659 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.659 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:45.659 5 00:08:45.659 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.659 13:40:28 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:45.659 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.659 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:45.659 6 00:08:45.659 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.659 13:40:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:45.659 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.659 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:45.659 7 00:08:45.660 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.660 13:40:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:45.660 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.660 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:45.660 8 00:08:45.660 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.660 13:40:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:45.660 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.660 13:40:28 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:45.660 9 00:08:45.660 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.660 13:40:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:45.660 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.660 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:45.660 10 00:08:45.660 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.918 13:40:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:45.918 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.918 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:45.918 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.918 13:40:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:45.918 13:40:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:45.918 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.918 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:46.177 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.177 13:40:28 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:46.177 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.177 13:40:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:48.080 13:40:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.080 13:40:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:48.080 13:40:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:48.080 13:40:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.080 13:40:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:49.013 13:40:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.013 00:08:49.013 real 0m3.102s 00:08:49.013 user 0m0.022s 00:08:49.013 sys 0m0.007s 00:08:49.014 13:40:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.014 13:40:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:49.014 ************************************ 00:08:49.014 END TEST scheduler_create_thread 00:08:49.014 ************************************ 00:08:49.014 13:40:31 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:49.014 13:40:31 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 478715 00:08:49.014 13:40:31 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 478715 ']' 00:08:49.014 13:40:31 event.event_scheduler -- common/autotest_common.sh@958 -- # kill 
-0 478715 00:08:49.014 13:40:31 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:08:49.014 13:40:31 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:49.014 13:40:31 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 478715 00:08:49.014 13:40:31 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:49.014 13:40:31 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:49.014 13:40:31 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 478715' 00:08:49.014 killing process with pid 478715 00:08:49.014 13:40:31 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 478715 00:08:49.014 13:40:31 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 478715 00:08:49.270 [2024-12-05 13:40:31.637631] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:08:49.270 00:08:49.270 real 0m4.153s 00:08:49.270 user 0m6.643s 00:08:49.270 sys 0m0.357s 00:08:49.270 13:40:31 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.270 13:40:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:49.270 ************************************ 00:08:49.270 END TEST event_scheduler 00:08:49.270 ************************************ 00:08:49.270 13:40:31 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:49.528 13:40:31 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:49.528 13:40:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:49.528 13:40:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.528 13:40:31 event -- common/autotest_common.sh@10 -- # set +x 00:08:49.528 ************************************ 00:08:49.528 START TEST app_repeat 00:08:49.528 ************************************ 00:08:49.528 13:40:31 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:08:49.528 13:40:31 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.528 13:40:31 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:49.528 13:40:31 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:49.528 13:40:31 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:49.528 13:40:31 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:49.528 13:40:31 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:49.528 13:40:31 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:49.528 13:40:31 event.app_repeat -- event/event.sh@19 -- # repeat_pid=479457 00:08:49.528 13:40:31 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:49.528 13:40:31 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:49.528 13:40:31 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 479457' 00:08:49.528 Process app_repeat pid: 479457 00:08:49.528 13:40:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:49.528 13:40:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:49.528 spdk_app_start Round 0 00:08:49.528 13:40:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 479457 /var/tmp/spdk-nbd.sock 00:08:49.528 13:40:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 479457 ']' 00:08:49.528 13:40:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:49.528 13:40:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.528 13:40:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:49.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:49.528 13:40:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.528 13:40:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:49.528 [2024-12-05 13:40:31.934663] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:08:49.528 [2024-12-05 13:40:31.934714] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid479457 ] 00:08:49.528 [2024-12-05 13:40:32.011996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:49.528 [2024-12-05 13:40:32.052770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.528 [2024-12-05 13:40:32.052771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.787 13:40:32 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.787 13:40:32 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:49.787 13:40:32 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:49.787 Malloc0 00:08:49.787 13:40:32 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:50.045 Malloc1 00:08:50.045 13:40:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:50.045 13:40:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.045 13:40:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:50.045 13:40:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:50.045 13:40:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:50.045 13:40:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:50.045 13:40:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:50.045 
13:40:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.045 13:40:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:50.045 13:40:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:50.045 13:40:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:50.045 13:40:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:50.045 13:40:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:50.045 13:40:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:50.045 13:40:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:50.045 13:40:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:50.303 /dev/nbd0 00:08:50.303 13:40:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:50.303 13:40:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:50.303 13:40:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:50.303 13:40:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:50.303 13:40:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:50.303 13:40:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:50.303 13:40:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:50.303 13:40:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:50.303 13:40:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:50.303 13:40:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:50.303 13:40:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:08:50.303 1+0 records in 00:08:50.303 1+0 records out 00:08:50.303 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181918 s, 22.5 MB/s 00:08:50.303 13:40:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:50.303 13:40:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:50.303 13:40:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:50.303 13:40:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:50.303 13:40:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:50.303 13:40:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:50.303 13:40:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:50.303 13:40:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:50.562 /dev/nbd1 00:08:50.562 13:40:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:50.562 13:40:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:50.562 13:40:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:50.562 13:40:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:50.562 13:40:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:50.562 13:40:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:50.562 13:40:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:50.562 13:40:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:50.562 13:40:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:50.562 13:40:33 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:50.562 13:40:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:50.562 1+0 records in 00:08:50.562 1+0 records out 00:08:50.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188127 s, 21.8 MB/s 00:08:50.562 13:40:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:50.562 13:40:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:50.562 13:40:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:50.562 13:40:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:50.562 13:40:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:50.562 13:40:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:50.562 13:40:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:50.562 13:40:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:50.562 13:40:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.562 13:40:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:50.820 { 00:08:50.820 "nbd_device": "/dev/nbd0", 00:08:50.820 "bdev_name": "Malloc0" 00:08:50.820 }, 00:08:50.820 { 00:08:50.820 "nbd_device": "/dev/nbd1", 00:08:50.820 "bdev_name": "Malloc1" 00:08:50.820 } 00:08:50.820 ]' 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:50.820 { 00:08:50.820 "nbd_device": "/dev/nbd0", 00:08:50.820 "bdev_name": "Malloc0" 00:08:50.820 
}, 00:08:50.820 { 00:08:50.820 "nbd_device": "/dev/nbd1", 00:08:50.820 "bdev_name": "Malloc1" 00:08:50.820 } 00:08:50.820 ]' 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:50.820 /dev/nbd1' 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:50.820 /dev/nbd1' 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:50.820 256+0 records in 00:08:50.820 256+0 records out 00:08:50.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100747 s, 104 MB/s 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:50.820 256+0 records in 00:08:50.820 256+0 records out 00:08:50.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139679 s, 75.1 MB/s 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:50.820 256+0 records in 00:08:50.820 256+0 records out 00:08:50.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145696 s, 72.0 MB/s 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:50.820 13:40:33 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.820 13:40:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:51.078 13:40:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:51.078 13:40:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:51.078 13:40:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:51.078 13:40:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.078 13:40:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.078 13:40:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:51.078 13:40:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:51.078 13:40:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.078 13:40:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.078 13:40:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:51.335 13:40:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:51.335 13:40:33 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:51.335 13:40:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:51.335 13:40:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.335 13:40:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.335 13:40:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:51.335 13:40:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:51.335 13:40:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.335 13:40:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:51.335 13:40:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.335 13:40:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:51.593 13:40:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:51.593 13:40:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:51.593 13:40:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:51.593 13:40:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:51.593 13:40:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:51.593 13:40:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:51.593 13:40:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:51.593 13:40:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:51.593 13:40:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:51.593 13:40:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:51.593 13:40:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:51.593 13:40:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:51.593 13:40:34 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:51.852 13:40:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:52.111 [2024-12-05 13:40:34.449963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:52.111 [2024-12-05 13:40:34.487143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.111 [2024-12-05 13:40:34.487143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.111 [2024-12-05 13:40:34.527776] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:52.111 [2024-12-05 13:40:34.527816] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:55.395 13:40:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:55.395 13:40:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:55.395 spdk_app_start Round 1 00:08:55.395 13:40:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 479457 /var/tmp/spdk-nbd.sock 00:08:55.395 13:40:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 479457 ']' 00:08:55.395 13:40:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:55.395 13:40:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:55.395 13:40:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:55.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
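Each repeat round above exercises the same data path: `nbd_dd_data_verify` fills a temp file with random data, `dd`s it onto each nbd device with `oflag=direct`, then byte-compares the devices against the source with `cmp -b -n 1M`. A minimal sketch of that write/verify cycle, using plain temp files as stand-ins for `/dev/nbd0`/`/dev/nbd1` (an assumption — the real test writes to actual nbd block devices):

```shell
#!/usr/bin/env bash
# Sketch of the nbd_dd_data_verify write/verify cycle from the trace above.
# Temp files stand in for /dev/nbd0 and /dev/nbd1 (assumption: the real
# test targets real nbd block devices and uses oflag=direct).
set -euo pipefail

tmp_file=$(mktemp)              # plays the role of .../test/event/nbdrandtest
nbd0=$(mktemp)
nbd1=$(mktemp)                  # stand-ins for /dev/nbd0, /dev/nbd1

# Write phase: 1 MiB of random data, copied to every "device".
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
for dev in "$nbd0" "$nbd1"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 status=none
done

# Verify phase: byte-compare the first 1 MiB of each "device" to the source.
for dev in "$nbd0" "$nbd1"; do
    cmp -b -n 1M "$tmp_file" "$dev"   # non-zero exit on the first mismatch
done
verify_rc=$?

rm -f "$tmp_file" "$nbd0" "$nbd1"
echo "verify_rc=$verify_rc"
```

The trace shows the same `cmp -b -n 1M` invocation at `nbd_common.sh@83`; `set -e` there (as here) makes any mismatch abort the round.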
00:08:55.395 13:40:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:55.395 13:40:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:55.395 13:40:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:55.395 13:40:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:55.395 13:40:37 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:55.395 Malloc0 00:08:55.395 13:40:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:55.395 Malloc1 00:08:55.395 13:40:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:55.395 13:40:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.395 13:40:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:55.395 13:40:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:55.395 13:40:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:55.395 13:40:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:55.395 13:40:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:55.395 13:40:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.395 13:40:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:55.395 13:40:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:55.395 13:40:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:55.395 13:40:37 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:08:55.395 13:40:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:55.395 13:40:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:55.395 13:40:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:55.395 13:40:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:55.653 /dev/nbd0 00:08:55.653 13:40:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:55.653 13:40:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:55.653 13:40:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:55.653 13:40:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:55.653 13:40:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:55.653 13:40:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:55.653 13:40:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:55.653 13:40:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:55.653 13:40:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:55.653 13:40:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:55.653 13:40:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:55.653 1+0 records in 00:08:55.653 1+0 records out 00:08:55.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236583 s, 17.3 MB/s 00:08:55.653 13:40:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:55.653 13:40:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:55.653 13:40:38 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:55.653 13:40:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:55.653 13:40:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:55.653 13:40:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:55.653 13:40:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:55.653 13:40:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:55.911 /dev/nbd1 00:08:55.911 13:40:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:55.912 13:40:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:55.912 13:40:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:55.912 13:40:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:55.912 13:40:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:55.912 13:40:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:55.912 13:40:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:55.912 13:40:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:55.912 13:40:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:55.912 13:40:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:55.912 13:40:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:55.912 1+0 records in 00:08:55.912 1+0 records out 00:08:55.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230202 s, 17.8 MB/s 00:08:55.912 13:40:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:55.912 13:40:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:55.912 13:40:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:55.912 13:40:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:55.912 13:40:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:55.912 13:40:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:55.912 13:40:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:55.912 13:40:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:55.912 13:40:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.912 13:40:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:56.170 { 00:08:56.170 "nbd_device": "/dev/nbd0", 00:08:56.170 "bdev_name": "Malloc0" 00:08:56.170 }, 00:08:56.170 { 00:08:56.170 "nbd_device": "/dev/nbd1", 00:08:56.170 "bdev_name": "Malloc1" 00:08:56.170 } 00:08:56.170 ]' 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:56.170 { 00:08:56.170 "nbd_device": "/dev/nbd0", 00:08:56.170 "bdev_name": "Malloc0" 00:08:56.170 }, 00:08:56.170 { 00:08:56.170 "nbd_device": "/dev/nbd1", 00:08:56.170 "bdev_name": "Malloc1" 00:08:56.170 } 00:08:56.170 ]' 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:56.170 /dev/nbd1' 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:56.170 13:40:38 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:56.170 /dev/nbd1' 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:56.170 256+0 records in 00:08:56.170 256+0 records out 00:08:56.170 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00945536 s, 111 MB/s 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:56.170 256+0 records in 00:08:56.170 256+0 records out 00:08:56.170 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136531 s, 76.8 MB/s 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:56.170 256+0 records in 00:08:56.170 256+0 records out 00:08:56.170 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148699 s, 70.5 MB/s 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:56.170 13:40:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:56.428 13:40:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:56.428 13:40:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:56.428 13:40:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:56.428 13:40:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:56.428 13:40:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:56.428 13:40:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:56.428 13:40:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:56.428 13:40:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:56.428 13:40:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:56.428 13:40:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:56.686 13:40:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:56.686 13:40:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:56.686 13:40:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:56.686 13:40:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:56.686 13:40:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:56.686 13:40:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:56.686 13:40:39 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:08:56.686 13:40:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:56.686 13:40:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:56.686 13:40:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:56.686 13:40:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:56.945 13:40:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:56.945 13:40:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:56.945 13:40:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:56.945 13:40:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:56.945 13:40:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:56.945 13:40:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:56.945 13:40:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:56.945 13:40:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:56.945 13:40:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:56.945 13:40:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:56.945 13:40:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:56.945 13:40:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:56.945 13:40:39 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:57.203 13:40:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:57.203 [2024-12-05 13:40:39.772575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:57.460 [2024-12-05 13:40:39.809846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.460 [2024-12-05 13:40:39.809847] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.460 [2024-12-05 13:40:39.850703] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:57.460 [2024-12-05 13:40:39.850743] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:00.744 13:40:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:00.744 13:40:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:00.744 spdk_app_start Round 2 00:09:00.744 13:40:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 479457 /var/tmp/spdk-nbd.sock 00:09:00.744 13:40:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 479457 ']' 00:09:00.744 13:40:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:00.744 13:40:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:00.744 13:40:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:00.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
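The `waitfornbd`/`waitfornbd_exit` sequences in every round follow one pattern: re-check a condition up to 20 times and `break` as soon as it holds (`(( i = 1 ))` … `(( i <= 20 ))` … `break` in the trace). A generic sketch of that bounded-retry helper — polling for a temp file here is an assumption; the real condition is `grep -q -w nbdX /proc/partitions`:

```shell
#!/usr/bin/env bash
# Sketch of the bounded polling used by waitfornbd/waitfornbd_exit above:
# retry a condition up to 20 times, returning as soon as it succeeds.
# Waiting on a temp file is illustrative; the real scripts check
# `grep -q -w nbdX /proc/partitions`.
set -u

wait_for() {   # wait_for <command...> -> 0 if it succeeded within 20 tries
    local i
    for ((i = 1; i <= 20; i++)); do
        if "$@"; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

marker=$(mktemp -u)                 # path only; the file does not exist yet
( sleep 0.3; touch "$marker" ) &    # appears shortly, like an nbd attach
if wait_for test -e "$marker"; then
    status=up
else
    status=timeout
fi
wait                                # reap the background job
rm -f "$marker"
echo "status=$status"
```

The cap of 20 tries bounds how long a hung nbd attach/detach can stall the round before the test fails instead of blocking forever.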
00:09:00.744 13:40:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:00.744 13:40:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:00.744 13:40:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.744 13:40:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:00.744 13:40:42 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:00.744 Malloc0 00:09:00.744 13:40:43 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:00.744 Malloc1 00:09:00.744 13:40:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:00.744 13:40:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.744 13:40:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:00.744 13:40:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:00.744 13:40:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:00.744 13:40:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:00.744 13:40:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:00.744 13:40:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.744 13:40:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:00.744 13:40:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:00.744 13:40:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:00.744 13:40:43 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:09:00.744 13:40:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:00.744 13:40:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:00.744 13:40:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:00.744 13:40:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:01.004 /dev/nbd0 00:09:01.004 13:40:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:01.004 13:40:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:01.004 13:40:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:01.004 13:40:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:01.004 13:40:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:01.004 13:40:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:01.004 13:40:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:01.004 13:40:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:01.004 13:40:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:01.004 13:40:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:01.004 13:40:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:01.004 1+0 records in 00:09:01.004 1+0 records out 00:09:01.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229208 s, 17.9 MB/s 00:09:01.004 13:40:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:01.004 13:40:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:01.004 13:40:43 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:01.004 13:40:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:01.004 13:40:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:01.004 13:40:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:01.004 13:40:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:01.004 13:40:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:01.263 /dev/nbd1 00:09:01.263 13:40:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:01.263 13:40:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:01.263 13:40:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:01.263 13:40:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:01.263 13:40:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:01.263 13:40:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:01.263 13:40:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:01.263 13:40:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:01.263 13:40:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:01.263 13:40:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:01.263 13:40:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:01.263 1+0 records in 00:09:01.263 1+0 records out 00:09:01.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000138759 s, 29.5 MB/s 00:09:01.263 13:40:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:01.263 13:40:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:01.263 13:40:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:01.263 13:40:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:01.263 13:40:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:01.263 13:40:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:01.263 13:40:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:01.263 13:40:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:01.263 13:40:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:01.263 13:40:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:01.522 13:40:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:01.522 { 00:09:01.522 "nbd_device": "/dev/nbd0", 00:09:01.522 "bdev_name": "Malloc0" 00:09:01.522 }, 00:09:01.522 { 00:09:01.522 "nbd_device": "/dev/nbd1", 00:09:01.522 "bdev_name": "Malloc1" 00:09:01.522 } 00:09:01.522 ]' 00:09:01.522 13:40:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:01.522 { 00:09:01.522 "nbd_device": "/dev/nbd0", 00:09:01.522 "bdev_name": "Malloc0" 00:09:01.522 }, 00:09:01.522 { 00:09:01.522 "nbd_device": "/dev/nbd1", 00:09:01.522 "bdev_name": "Malloc1" 00:09:01.522 } 00:09:01.522 ]' 00:09:01.522 13:40:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:01.522 13:40:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:01.522 /dev/nbd1' 00:09:01.522 13:40:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:01.522 13:40:43 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:01.522 /dev/nbd1' 00:09:01.522 13:40:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:01.522 13:40:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:01.522 13:40:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:01.522 13:40:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:01.522 13:40:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:01.523 13:40:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:01.523 13:40:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:01.523 13:40:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:01.523 13:40:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:01.523 13:40:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:01.523 13:40:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:01.523 256+0 records in 00:09:01.523 256+0 records out 00:09:01.523 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107053 s, 97.9 MB/s 00:09:01.523 13:40:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:01.523 13:40:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:01.523 256+0 records in 00:09:01.523 256+0 records out 00:09:01.523 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141164 s, 74.3 MB/s 00:09:01.523 13:40:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:01.523 13:40:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:01.523 256+0 records in 00:09:01.523 256+0 records out 00:09:01.523 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152114 s, 68.9 MB/s 00:09:01.523 13:40:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:01.523 13:40:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:01.523 13:40:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:01.523 13:40:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:01.523 13:40:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:01.523 13:40:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:01.523 13:40:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:01.523 13:40:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:01.523 13:40:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:01.523 13:40:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:01.523 13:40:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:01.523 13:40:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:01.523 13:40:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:01.523 13:40:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:01.523 13:40:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:09:01.523 13:40:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:01.523 13:40:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:01.523 13:40:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:01.523 13:40:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:01.781 13:40:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:01.781 13:40:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:01.781 13:40:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:01.781 13:40:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:01.781 13:40:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:01.781 13:40:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:01.781 13:40:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:01.781 13:40:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:01.781 13:40:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:01.781 13:40:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:02.040 13:40:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:02.040 13:40:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:02.040 13:40:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:02.040 13:40:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:02.040 13:40:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:02.040 13:40:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:02.040 13:40:44 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:09:02.040 13:40:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:02.040 13:40:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:02.040 13:40:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:02.040 13:40:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:02.298 13:40:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:02.298 13:40:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:02.298 13:40:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:02.298 13:40:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:02.298 13:40:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:02.298 13:40:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:02.298 13:40:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:02.298 13:40:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:02.298 13:40:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:02.298 13:40:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:02.298 13:40:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:02.298 13:40:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:02.298 13:40:44 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:02.557 13:40:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:02.557 [2024-12-05 13:40:45.042227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:02.557 [2024-12-05 13:40:45.078742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.557 [2024-12-05 13:40:45.078744] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.557 [2024-12-05 13:40:45.119869] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:02.557 [2024-12-05 13:40:45.119910] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:05.844 13:40:47 event.app_repeat -- event/event.sh@38 -- # waitforlisten 479457 /var/tmp/spdk-nbd.sock 00:09:05.844 13:40:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 479457 ']' 00:09:05.844 13:40:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:05.844 13:40:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.844 13:40:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:05.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:09:05.844 13:40:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.844 13:40:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:05.844 13:40:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.844 13:40:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:05.844 13:40:48 event.app_repeat -- event/event.sh@39 -- # killprocess 479457 00:09:05.844 13:40:48 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 479457 ']' 00:09:05.844 13:40:48 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 479457 00:09:05.844 13:40:48 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:09:05.844 13:40:48 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.844 13:40:48 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 479457 00:09:05.844 13:40:48 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.844 13:40:48 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.844 13:40:48 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 479457' 00:09:05.844 killing process with pid 479457 00:09:05.844 13:40:48 event.app_repeat -- common/autotest_common.sh@973 -- # kill 479457 00:09:05.844 13:40:48 event.app_repeat -- common/autotest_common.sh@978 -- # wait 479457 00:09:05.844 spdk_app_start is called in Round 0. 00:09:05.844 Shutdown signal received, stop current app iteration 00:09:05.844 Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 reinitialization... 00:09:05.844 spdk_app_start is called in Round 1. 00:09:05.844 Shutdown signal received, stop current app iteration 00:09:05.844 Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 reinitialization... 00:09:05.844 spdk_app_start is called in Round 2. 
00:09:05.844 Shutdown signal received, stop current app iteration 00:09:05.844 Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 reinitialization... 00:09:05.844 spdk_app_start is called in Round 3. 00:09:05.844 Shutdown signal received, stop current app iteration 00:09:05.844 13:40:48 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:05.844 13:40:48 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:05.844 00:09:05.844 real 0m16.395s 00:09:05.844 user 0m35.966s 00:09:05.844 sys 0m2.611s 00:09:05.844 13:40:48 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.844 13:40:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:05.844 ************************************ 00:09:05.844 END TEST app_repeat 00:09:05.844 ************************************ 00:09:05.844 13:40:48 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:05.844 13:40:48 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:05.844 13:40:48 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:05.844 13:40:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.844 13:40:48 event -- common/autotest_common.sh@10 -- # set +x 00:09:05.844 ************************************ 00:09:05.844 START TEST cpu_locks 00:09:05.844 ************************************ 00:09:05.844 13:40:48 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:06.103 * Looking for test storage... 
00:09:06.103 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:09:06.103 13:40:48 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:06.103 13:40:48 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:09:06.103 13:40:48 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:06.103 13:40:48 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:06.103 13:40:48 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:06.103 13:40:48 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:06.103 13:40:48 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:06.103 13:40:48 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.103 13:40:48 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:06.103 13:40:48 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:06.103 13:40:48 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:06.103 13:40:48 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:06.103 13:40:48 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:06.103 13:40:48 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:06.103 13:40:48 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:06.103 13:40:48 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:06.103 13:40:48 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:06.103 13:40:48 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:06.103 13:40:48 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:06.103 13:40:48 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:06.103 13:40:48 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:06.103 13:40:48 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.103 13:40:48 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:06.103 13:40:48 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.103 13:40:48 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:06.103 13:40:48 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:06.103 13:40:48 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.103 13:40:48 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:06.103 13:40:48 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.103 13:40:48 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.104 13:40:48 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.104 13:40:48 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:06.104 13:40:48 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.104 13:40:48 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:06.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.104 --rc genhtml_branch_coverage=1 00:09:06.104 --rc genhtml_function_coverage=1 00:09:06.104 --rc genhtml_legend=1 00:09:06.104 --rc geninfo_all_blocks=1 00:09:06.104 --rc geninfo_unexecuted_blocks=1 00:09:06.104 00:09:06.104 ' 00:09:06.104 13:40:48 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:06.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.104 --rc genhtml_branch_coverage=1 00:09:06.104 --rc genhtml_function_coverage=1 00:09:06.104 --rc genhtml_legend=1 00:09:06.104 --rc geninfo_all_blocks=1 00:09:06.104 --rc geninfo_unexecuted_blocks=1 
00:09:06.104 00:09:06.104 ' 00:09:06.104 13:40:48 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:06.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.104 --rc genhtml_branch_coverage=1 00:09:06.104 --rc genhtml_function_coverage=1 00:09:06.104 --rc genhtml_legend=1 00:09:06.104 --rc geninfo_all_blocks=1 00:09:06.104 --rc geninfo_unexecuted_blocks=1 00:09:06.104 00:09:06.104 ' 00:09:06.104 13:40:48 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:06.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.104 --rc genhtml_branch_coverage=1 00:09:06.104 --rc genhtml_function_coverage=1 00:09:06.104 --rc genhtml_legend=1 00:09:06.104 --rc geninfo_all_blocks=1 00:09:06.104 --rc geninfo_unexecuted_blocks=1 00:09:06.104 00:09:06.104 ' 00:09:06.104 13:40:48 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:06.104 13:40:48 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:06.104 13:40:48 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:06.104 13:40:48 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:06.104 13:40:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:06.104 13:40:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.104 13:40:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:06.104 ************************************ 00:09:06.104 START TEST default_locks 00:09:06.104 ************************************ 00:09:06.104 13:40:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:09:06.104 13:40:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=482452 00:09:06.104 13:40:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 482452 00:09:06.104 13:40:48 event.cpu_locks.default_locks 
-- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:06.104 13:40:48 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 482452 ']' 00:09:06.104 13:40:48 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.104 13:40:48 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:06.104 13:40:48 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.104 13:40:48 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:06.104 13:40:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:06.104 [2024-12-05 13:40:48.619926] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:09:06.104 [2024-12-05 13:40:48.619968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482452 ] 00:09:06.362 [2024-12-05 13:40:48.694616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.362 [2024-12-05 13:40:48.736122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.362 13:40:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:06.362 13:40:48 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:09:06.362 13:40:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 482452 00:09:06.362 13:40:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 482452 00:09:06.362 13:40:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:06.621 lslocks: write error 00:09:06.621 13:40:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 482452 00:09:06.621 13:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 482452 ']' 00:09:06.621 13:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 482452 00:09:06.621 13:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:09:06.621 13:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:06.621 13:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 482452 00:09:06.880 13:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:06.880 13:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:06.880 13:40:49 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 482452' 00:09:06.880 killing process with pid 482452 00:09:06.880 13:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 482452 00:09:06.880 13:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 482452 00:09:07.139 13:40:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 482452 00:09:07.139 13:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:09:07.139 13:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 482452 00:09:07.139 13:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:07.139 13:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:07.139 13:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:07.139 13:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:07.139 13:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 482452 00:09:07.139 13:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 482452 ']' 00:09:07.139 13:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.139 13:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.139 13:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:07.139 13:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.139 13:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:07.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (482452) - No such process 00:09:07.139 ERROR: process (pid: 482452) is no longer running 00:09:07.139 13:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:07.139 13:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:09:07.139 13:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:09:07.139 13:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:07.139 13:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:07.139 13:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:07.139 13:40:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:07.139 13:40:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:07.139 13:40:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:07.139 13:40:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:07.139 00:09:07.139 real 0m0.982s 00:09:07.139 user 0m0.915s 00:09:07.139 sys 0m0.459s 00:09:07.139 13:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.139 13:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:07.139 ************************************ 00:09:07.139 END TEST default_locks 00:09:07.139 ************************************ 00:09:07.139 13:40:49 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:07.139 13:40:49 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:07.139 13:40:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.139 13:40:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:07.139 ************************************ 00:09:07.139 START TEST default_locks_via_rpc 00:09:07.139 ************************************ 00:09:07.139 13:40:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:09:07.139 13:40:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=482710 00:09:07.139 13:40:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 482710 00:09:07.139 13:40:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:07.139 13:40:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 482710 ']' 00:09:07.139 13:40:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.139 13:40:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.139 13:40:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.139 13:40:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.139 13:40:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.139 [2024-12-05 13:40:49.670972] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:09:07.139 [2024-12-05 13:40:49.671015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482710 ] 00:09:07.397 [2024-12-05 13:40:49.744058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.397 [2024-12-05 13:40:49.785651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.655 13:40:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:07.655 13:40:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:07.655 13:40:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:07.655 13:40:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.655 13:40:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.656 13:40:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.656 13:40:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:07.656 13:40:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:07.656 13:40:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:07.656 13:40:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:07.656 13:40:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:07.656 13:40:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.656 13:40:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.656 13:40:50 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.656 13:40:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 482710 00:09:07.656 13:40:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 482710 00:09:07.656 13:40:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:07.915 13:40:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 482710 00:09:07.915 13:40:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 482710 ']' 00:09:07.915 13:40:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 482710 00:09:07.915 13:40:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:09:07.915 13:40:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.915 13:40:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 482710 00:09:07.915 13:40:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.915 13:40:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.915 13:40:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 482710' 00:09:07.915 killing process with pid 482710 00:09:07.915 13:40:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 482710 00:09:07.915 13:40:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 482710 00:09:08.175 00:09:08.175 real 0m1.037s 00:09:08.175 user 0m0.995s 00:09:08.175 sys 0m0.463s 00:09:08.175 13:40:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.175 13:40:50 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.175 ************************************ 00:09:08.175 END TEST default_locks_via_rpc 00:09:08.175 ************************************ 00:09:08.175 13:40:50 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:08.175 13:40:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:08.175 13:40:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.175 13:40:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:08.175 ************************************ 00:09:08.175 START TEST non_locking_app_on_locked_coremask 00:09:08.175 ************************************ 00:09:08.175 13:40:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:09:08.175 13:40:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=482964 00:09:08.175 13:40:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 482964 /var/tmp/spdk.sock 00:09:08.175 13:40:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:08.175 13:40:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 482964 ']' 00:09:08.175 13:40:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.175 13:40:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.175 13:40:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:09:08.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.175 13:40:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.175 13:40:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:08.434 [2024-12-05 13:40:50.774033] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:09:08.434 [2024-12-05 13:40:50.774074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482964 ] 00:09:08.434 [2024-12-05 13:40:50.846664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.434 [2024-12-05 13:40:50.888224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.693 13:40:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.693 13:40:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:08.693 13:40:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=482973 00:09:08.693 13:40:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 482973 /var/tmp/spdk2.sock 00:09:08.693 13:40:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:08.693 13:40:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 482973 ']' 00:09:08.693 13:40:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:09:08.693 13:40:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.693 13:40:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:08.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:08.693 13:40:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.693 13:40:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:08.693 [2024-12-05 13:40:51.162327] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:09:08.693 [2024-12-05 13:40:51.162377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482973 ] 00:09:08.693 [2024-12-05 13:40:51.253833] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:08.693 [2024-12-05 13:40:51.253863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.951 [2024-12-05 13:40:51.342194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.515 13:40:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:09.515 13:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:09.515 13:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 482964 00:09:09.515 13:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 482964 00:09:09.515 13:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:10.081 lslocks: write error 00:09:10.081 13:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 482964 00:09:10.081 13:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 482964 ']' 00:09:10.081 13:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 482964 00:09:10.081 13:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:10.081 13:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:10.081 13:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 482964 00:09:10.081 13:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:10.081 13:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:10.081 13:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 482964' 00:09:10.081 killing process with pid 482964 00:09:10.081 13:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 482964 00:09:10.081 13:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 482964 00:09:10.646 13:40:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 482973 00:09:10.646 13:40:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 482973 ']' 00:09:10.646 13:40:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 482973 00:09:10.646 13:40:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:10.646 13:40:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:10.646 13:40:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 482973 00:09:10.905 13:40:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:10.905 13:40:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:10.905 13:40:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 482973' 00:09:10.905 killing process with pid 482973 00:09:10.905 13:40:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 482973 00:09:10.905 13:40:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 482973 00:09:11.164 00:09:11.164 real 0m2.846s 00:09:11.164 user 0m2.975s 00:09:11.164 sys 0m0.976s 00:09:11.164 13:40:53 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.164 13:40:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:11.164 ************************************ 00:09:11.164 END TEST non_locking_app_on_locked_coremask 00:09:11.164 ************************************ 00:09:11.164 13:40:53 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:11.164 13:40:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:11.164 13:40:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.164 13:40:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:11.164 ************************************ 00:09:11.164 START TEST locking_app_on_unlocked_coremask 00:09:11.164 ************************************ 00:09:11.164 13:40:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:09:11.164 13:40:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=483468 00:09:11.164 13:40:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 483468 /var/tmp/spdk.sock 00:09:11.164 13:40:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:11.164 13:40:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 483468 ']' 00:09:11.164 13:40:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.164 13:40:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.164 13:40:53 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.164 13:40:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.164 13:40:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:11.164 [2024-12-05 13:40:53.693276] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:09:11.164 [2024-12-05 13:40:53.693320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483468 ] 00:09:11.423 [2024-12-05 13:40:53.766466] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:11.423 [2024-12-05 13:40:53.766491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.423 [2024-12-05 13:40:53.808110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.682 13:40:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:11.683 13:40:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:11.683 13:40:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:11.683 13:40:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=483473 00:09:11.683 13:40:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 483473 /var/tmp/spdk2.sock 00:09:11.683 13:40:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 483473 ']' 00:09:11.683 13:40:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:11.683 13:40:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.683 13:40:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:11.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:11.683 13:40:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.683 13:40:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:11.683 [2024-12-05 13:40:54.049616] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:09:11.683 [2024-12-05 13:40:54.049663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483473 ] 00:09:11.683 [2024-12-05 13:40:54.134243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.683 [2024-12-05 13:40:54.214350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.621 13:40:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.621 13:40:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:12.621 13:40:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 483473 00:09:12.621 13:40:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 483473 00:09:12.621 13:40:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:12.621 lslocks: write error 00:09:12.621 13:40:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 483468 00:09:12.621 13:40:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 483468 ']' 00:09:12.621 13:40:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 483468 00:09:12.621 13:40:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:12.621 13:40:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:12.621 13:40:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 483468 00:09:12.621 13:40:55 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:12.621 13:40:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:12.621 13:40:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 483468' 00:09:12.621 killing process with pid 483468 00:09:12.621 13:40:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 483468 00:09:12.621 13:40:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 483468 00:09:13.186 13:40:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 483473 00:09:13.186 13:40:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 483473 ']' 00:09:13.186 13:40:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 483473 00:09:13.186 13:40:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:13.186 13:40:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.186 13:40:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 483473 00:09:13.445 13:40:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:13.445 13:40:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:13.445 13:40:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 483473' 00:09:13.445 killing process with pid 483473 00:09:13.445 13:40:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 483473 00:09:13.445 13:40:55 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 483473 00:09:13.704 00:09:13.704 real 0m2.460s 00:09:13.704 user 0m2.585s 00:09:13.704 sys 0m0.772s 00:09:13.704 13:40:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.704 13:40:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:13.704 ************************************ 00:09:13.704 END TEST locking_app_on_unlocked_coremask 00:09:13.704 ************************************ 00:09:13.704 13:40:56 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:13.704 13:40:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:13.704 13:40:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.704 13:40:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:13.704 ************************************ 00:09:13.704 START TEST locking_app_on_locked_coremask 00:09:13.704 ************************************ 00:09:13.704 13:40:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:09:13.704 13:40:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=483961 00:09:13.704 13:40:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 483961 /var/tmp/spdk.sock 00:09:13.704 13:40:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:13.704 13:40:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 483961 ']' 00:09:13.704 13:40:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:09:13.704 13:40:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.704 13:40:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.704 13:40:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.705 13:40:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:13.705 [2024-12-05 13:40:56.223499] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:09:13.705 [2024-12-05 13:40:56.223544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483961 ] 00:09:13.963 [2024-12-05 13:40:56.298863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.963 [2024-12-05 13:40:56.337622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.222 13:40:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.222 13:40:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:14.222 13:40:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=483970 00:09:14.222 13:40:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 483970 /var/tmp/spdk2.sock 00:09:14.222 13:40:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:09:14.222 13:40:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:14.222 13:40:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 483970 /var/tmp/spdk2.sock 00:09:14.222 13:40:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:14.222 13:40:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:14.222 13:40:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:14.222 13:40:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:14.222 13:40:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 483970 /var/tmp/spdk2.sock 00:09:14.222 13:40:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 483970 ']' 00:09:14.222 13:40:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:14.222 13:40:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.222 13:40:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:14.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:09:14.222 13:40:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.222 13:40:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:14.222 [2024-12-05 13:40:56.611696] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:09:14.222 [2024-12-05 13:40:56.611738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483970 ] 00:09:14.222 [2024-12-05 13:40:56.698382] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 483961 has claimed it. 00:09:14.222 [2024-12-05 13:40:56.698425] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:14.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (483970) - No such process 00:09:14.788 ERROR: process (pid: 483970) is no longer running 00:09:14.788 13:40:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.788 13:40:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:14.788 13:40:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:14.788 13:40:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:14.788 13:40:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:14.788 13:40:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:14.788 13:40:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 483961 00:09:14.788 13:40:57 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 483961 00:09:14.788 13:40:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:15.360 lslocks: write error 00:09:15.360 13:40:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 483961 00:09:15.360 13:40:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 483961 ']' 00:09:15.360 13:40:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 483961 00:09:15.360 13:40:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:15.360 13:40:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:15.360 13:40:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 483961 00:09:15.360 13:40:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:15.360 13:40:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:15.360 13:40:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 483961' 00:09:15.360 killing process with pid 483961 00:09:15.360 13:40:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 483961 00:09:15.360 13:40:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 483961 00:09:15.672 00:09:15.672 real 0m1.941s 00:09:15.672 user 0m2.063s 00:09:15.672 sys 0m0.663s 00:09:15.672 13:40:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.672 13:40:58 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:09:15.672 ************************************ 00:09:15.672 END TEST locking_app_on_locked_coremask 00:09:15.672 ************************************ 00:09:15.672 13:40:58 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:15.672 13:40:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:15.672 13:40:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.672 13:40:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:15.672 ************************************ 00:09:15.672 START TEST locking_overlapped_coremask 00:09:15.672 ************************************ 00:09:15.672 13:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:09:15.672 13:40:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=484231 00:09:15.672 13:40:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 484231 /var/tmp/spdk.sock 00:09:15.672 13:40:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:09:15.672 13:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 484231 ']' 00:09:15.672 13:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.672 13:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.672 13:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:15.672 13:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.672 13:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:15.672 [2024-12-05 13:40:58.233169] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:09:15.672 [2024-12-05 13:40:58.233214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484231 ] 00:09:15.994 [2024-12-05 13:40:58.309278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:15.994 [2024-12-05 13:40:58.350271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.994 [2024-12-05 13:40:58.350394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.994 [2024-12-05 13:40:58.350394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.994 13:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.994 13:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:15.994 13:40:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=484252 00:09:15.994 13:40:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 484252 /var/tmp/spdk2.sock 00:09:15.994 13:40:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:15.994 13:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:15.994 13:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 484252 /var/tmp/spdk2.sock 00:09:15.994 13:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:15.994 13:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.994 13:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:15.994 13:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.994 13:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 484252 /var/tmp/spdk2.sock 00:09:15.994 13:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 484252 ']' 00:09:15.994 13:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:15.994 13:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.994 13:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:15.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:15.994 13:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.994 13:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:16.278 [2024-12-05 13:40:58.615993] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:09:16.278 [2024-12-05 13:40:58.616041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484252 ] 00:09:16.278 [2024-12-05 13:40:58.708970] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 484231 has claimed it. 00:09:16.278 [2024-12-05 13:40:58.709011] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:16.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (484252) - No such process 00:09:16.844 ERROR: process (pid: 484252) is no longer running 00:09:16.844 13:40:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.844 13:40:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:16.844 13:40:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:16.844 13:40:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:16.844 13:40:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:16.844 13:40:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:16.844 13:40:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:16.844 13:40:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:16.844 13:40:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:16.844 13:40:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:16.844 13:40:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 484231 00:09:16.844 13:40:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 484231 ']' 00:09:16.844 13:40:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 484231 00:09:16.844 13:40:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:09:16.844 13:40:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.844 13:40:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 484231 00:09:16.844 13:40:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.844 13:40:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.845 13:40:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 484231' 00:09:16.845 killing process with pid 484231 00:09:16.845 13:40:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 484231 00:09:16.845 13:40:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 484231 00:09:17.104 00:09:17.104 real 0m1.435s 00:09:17.104 user 0m3.980s 00:09:17.104 sys 0m0.397s 00:09:17.104 13:40:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.104 13:40:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:17.104 ************************************ 
00:09:17.104 END TEST locking_overlapped_coremask 00:09:17.104 ************************************ 00:09:17.104 13:40:59 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:17.104 13:40:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:17.104 13:40:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.104 13:40:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:17.104 ************************************ 00:09:17.104 START TEST locking_overlapped_coremask_via_rpc 00:09:17.104 ************************************ 00:09:17.104 13:40:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:09:17.363 13:40:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=484509 00:09:17.363 13:40:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 484509 /var/tmp/spdk.sock 00:09:17.363 13:40:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:17.363 13:40:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 484509 ']' 00:09:17.363 13:40:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.363 13:40:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.363 13:40:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:17.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.363 13:40:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.363 13:40:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.363 [2024-12-05 13:40:59.739903] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:09:17.363 [2024-12-05 13:40:59.739942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484509 ] 00:09:17.363 [2024-12-05 13:40:59.815558] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:17.363 [2024-12-05 13:40:59.815583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:17.363 [2024-12-05 13:40:59.859585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.363 [2024-12-05 13:40:59.859694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.363 [2024-12-05 13:40:59.859694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:17.623 13:41:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.623 13:41:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:17.623 13:41:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=484580 00:09:17.623 13:41:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 484580 /var/tmp/spdk2.sock 00:09:17.623 13:41:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:09:17.623 13:41:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 484580 ']' 00:09:17.623 13:41:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:17.623 13:41:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.623 13:41:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:17.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:17.623 13:41:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.623 13:41:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.623 [2024-12-05 13:41:00.131828] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:09:17.623 [2024-12-05 13:41:00.131878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484580 ] 00:09:17.881 [2024-12-05 13:41:00.225533] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:17.881 [2024-12-05 13:41:00.225563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:17.881 [2024-12-05 13:41:00.312967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:17.881 [2024-12-05 13:41:00.316413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:17.881 [2024-12-05 13:41:00.316414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:18.447 13:41:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.447 13:41:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:18.447 13:41:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:18.447 13:41:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.447 13:41:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.447 13:41:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.447 13:41:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:18.447 13:41:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:18.447 13:41:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:18.448 13:41:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:18.448 13:41:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:18.448 13:41:00 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:18.448 13:41:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:18.448 13:41:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:18.448 13:41:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.448 13:41:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.448 [2024-12-05 13:41:00.993437] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 484509 has claimed it. 00:09:18.448 request: 00:09:18.448 { 00:09:18.448 "method": "framework_enable_cpumask_locks", 00:09:18.448 "req_id": 1 00:09:18.448 } 00:09:18.448 Got JSON-RPC error response 00:09:18.448 response: 00:09:18.448 { 00:09:18.448 "code": -32603, 00:09:18.448 "message": "Failed to claim CPU core: 2" 00:09:18.448 } 00:09:18.448 13:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:18.448 13:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:18.448 13:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:18.448 13:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:18.448 13:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:18.448 13:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 484509 /var/tmp/spdk.sock 00:09:18.448 13:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- 
# '[' -z 484509 ']' 00:09:18.448 13:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.448 13:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.448 13:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.448 13:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.448 13:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.706 13:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.706 13:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:18.706 13:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 484580 /var/tmp/spdk2.sock 00:09:18.706 13:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 484580 ']' 00:09:18.706 13:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:18.706 13:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.706 13:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:18.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:09:18.706 13:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.706 13:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.965 13:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.965 13:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:18.965 13:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:18.965 13:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:18.965 13:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:18.965 13:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:18.965 00:09:18.965 real 0m1.707s 00:09:18.965 user 0m0.812s 00:09:18.965 sys 0m0.146s 00:09:18.965 13:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.965 13:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.965 ************************************ 00:09:18.965 END TEST locking_overlapped_coremask_via_rpc 00:09:18.965 ************************************ 00:09:18.965 13:41:01 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:18.965 13:41:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 484509 ]] 00:09:18.965 13:41:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 484509 00:09:18.965 13:41:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 484509 ']' 00:09:18.965 13:41:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 484509 00:09:18.965 13:41:01 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:18.965 13:41:01 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:18.965 13:41:01 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 484509 00:09:18.965 13:41:01 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:18.965 13:41:01 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:18.965 13:41:01 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 484509' 00:09:18.965 killing process with pid 484509 00:09:18.965 13:41:01 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 484509 00:09:18.965 13:41:01 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 484509 00:09:19.224 13:41:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 484580 ]] 00:09:19.224 13:41:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 484580 00:09:19.224 13:41:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 484580 ']' 00:09:19.224 13:41:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 484580 00:09:19.224 13:41:01 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:19.224 13:41:01 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.224 13:41:01 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 484580 00:09:19.483 13:41:01 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:19.483 13:41:01 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:19.483 13:41:01 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 484580' 00:09:19.483 
killing process with pid 484580 00:09:19.483 13:41:01 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 484580 00:09:19.483 13:41:01 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 484580 00:09:19.742 13:41:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:19.742 13:41:02 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:19.742 13:41:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 484509 ]] 00:09:19.742 13:41:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 484509 00:09:19.742 13:41:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 484509 ']' 00:09:19.742 13:41:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 484509 00:09:19.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (484509) - No such process 00:09:19.742 13:41:02 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 484509 is not found' 00:09:19.742 Process with pid 484509 is not found 00:09:19.742 13:41:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 484580 ]] 00:09:19.742 13:41:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 484580 00:09:19.742 13:41:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 484580 ']' 00:09:19.742 13:41:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 484580 00:09:19.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (484580) - No such process 00:09:19.742 13:41:02 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 484580 is not found' 00:09:19.742 Process with pid 484580 is not found 00:09:19.742 13:41:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:19.742 00:09:19.742 real 0m13.797s 00:09:19.742 user 0m24.070s 00:09:19.742 sys 0m4.820s 00:09:19.742 13:41:02 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.742 13:41:02 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:09:19.742 ************************************ 00:09:19.742 END TEST cpu_locks 00:09:19.742 ************************************ 00:09:19.742 00:09:19.742 real 0m38.478s 00:09:19.742 user 1m13.249s 00:09:19.742 sys 0m8.377s 00:09:19.742 13:41:02 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.742 13:41:02 event -- common/autotest_common.sh@10 -- # set +x 00:09:19.742 ************************************ 00:09:19.742 END TEST event 00:09:19.742 ************************************ 00:09:19.742 13:41:02 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:09:19.742 13:41:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:19.742 13:41:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.742 13:41:02 -- common/autotest_common.sh@10 -- # set +x 00:09:19.742 ************************************ 00:09:19.742 START TEST thread 00:09:19.742 ************************************ 00:09:19.742 13:41:02 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:09:20.008 * Looking for test storage... 
00:09:20.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:09:20.008 13:41:02 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:20.008 13:41:02 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:09:20.008 13:41:02 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:20.008 13:41:02 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:20.008 13:41:02 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.008 13:41:02 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.008 13:41:02 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.008 13:41:02 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.008 13:41:02 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.008 13:41:02 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.008 13:41:02 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.008 13:41:02 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.008 13:41:02 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.008 13:41:02 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.008 13:41:02 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.008 13:41:02 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:20.008 13:41:02 thread -- scripts/common.sh@345 -- # : 1 00:09:20.008 13:41:02 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.008 13:41:02 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:20.008 13:41:02 thread -- scripts/common.sh@365 -- # decimal 1 00:09:20.008 13:41:02 thread -- scripts/common.sh@353 -- # local d=1 00:09:20.008 13:41:02 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.008 13:41:02 thread -- scripts/common.sh@355 -- # echo 1 00:09:20.008 13:41:02 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.008 13:41:02 thread -- scripts/common.sh@366 -- # decimal 2 00:09:20.008 13:41:02 thread -- scripts/common.sh@353 -- # local d=2 00:09:20.008 13:41:02 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.008 13:41:02 thread -- scripts/common.sh@355 -- # echo 2 00:09:20.008 13:41:02 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.008 13:41:02 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.008 13:41:02 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.008 13:41:02 thread -- scripts/common.sh@368 -- # return 0 00:09:20.008 13:41:02 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.008 13:41:02 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:20.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.008 --rc genhtml_branch_coverage=1 00:09:20.008 --rc genhtml_function_coverage=1 00:09:20.008 --rc genhtml_legend=1 00:09:20.008 --rc geninfo_all_blocks=1 00:09:20.008 --rc geninfo_unexecuted_blocks=1 00:09:20.008 00:09:20.008 ' 00:09:20.008 13:41:02 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:20.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.008 --rc genhtml_branch_coverage=1 00:09:20.008 --rc genhtml_function_coverage=1 00:09:20.008 --rc genhtml_legend=1 00:09:20.008 --rc geninfo_all_blocks=1 00:09:20.008 --rc geninfo_unexecuted_blocks=1 00:09:20.008 00:09:20.008 ' 00:09:20.008 13:41:02 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:20.008 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.008 --rc genhtml_branch_coverage=1 00:09:20.008 --rc genhtml_function_coverage=1 00:09:20.008 --rc genhtml_legend=1 00:09:20.008 --rc geninfo_all_blocks=1 00:09:20.008 --rc geninfo_unexecuted_blocks=1 00:09:20.008 00:09:20.008 ' 00:09:20.008 13:41:02 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:20.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.008 --rc genhtml_branch_coverage=1 00:09:20.008 --rc genhtml_function_coverage=1 00:09:20.008 --rc genhtml_legend=1 00:09:20.008 --rc geninfo_all_blocks=1 00:09:20.008 --rc geninfo_unexecuted_blocks=1 00:09:20.008 00:09:20.008 ' 00:09:20.008 13:41:02 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:20.008 13:41:02 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:20.008 13:41:02 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.008 13:41:02 thread -- common/autotest_common.sh@10 -- # set +x 00:09:20.008 ************************************ 00:09:20.008 START TEST thread_poller_perf 00:09:20.008 ************************************ 00:09:20.009 13:41:02 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:20.009 [2024-12-05 13:41:02.485606] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:09:20.009 [2024-12-05 13:41:02.485676] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid485082 ] 00:09:20.009 [2024-12-05 13:41:02.564880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.267 [2024-12-05 13:41:02.605705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.267 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:09:21.203 [2024-12-05T12:41:03.790Z] ====================================== 00:09:21.203 [2024-12-05T12:41:03.790Z] busy:2104070176 (cyc) 00:09:21.203 [2024-12-05T12:41:03.790Z] total_run_count: 423000 00:09:21.203 [2024-12-05T12:41:03.790Z] tsc_hz: 2100000000 (cyc) 00:09:21.203 [2024-12-05T12:41:03.790Z] ====================================== 00:09:21.203 [2024-12-05T12:41:03.790Z] poller_cost: 4974 (cyc), 2368 (nsec) 00:09:21.203 00:09:21.203 real 0m1.184s 00:09:21.203 user 0m1.110s 00:09:21.203 sys 0m0.070s 00:09:21.203 13:41:03 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.203 13:41:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:21.203 ************************************ 00:09:21.203 END TEST thread_poller_perf 00:09:21.203 ************************************ 00:09:21.203 13:41:03 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:21.203 13:41:03 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:21.203 13:41:03 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.203 13:41:03 thread -- common/autotest_common.sh@10 -- # set +x 00:09:21.203 ************************************ 00:09:21.203 START TEST thread_poller_perf 00:09:21.203 
************************************ 00:09:21.203 13:41:03 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:21.203 [2024-12-05 13:41:03.737342] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:09:21.203 [2024-12-05 13:41:03.737414] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid485329 ] 00:09:21.462 [2024-12-05 13:41:03.816359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.462 [2024-12-05 13:41:03.856347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.462 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:09:22.399 [2024-12-05T12:41:04.986Z] ====================================== 00:09:22.399 [2024-12-05T12:41:04.986Z] busy:2101439698 (cyc) 00:09:22.399 [2024-12-05T12:41:04.986Z] total_run_count: 5611000 00:09:22.399 [2024-12-05T12:41:04.986Z] tsc_hz: 2100000000 (cyc) 00:09:22.399 [2024-12-05T12:41:04.986Z] ====================================== 00:09:22.399 [2024-12-05T12:41:04.986Z] poller_cost: 374 (cyc), 178 (nsec) 00:09:22.399 00:09:22.399 real 0m1.178s 00:09:22.399 user 0m1.102s 00:09:22.399 sys 0m0.072s 00:09:22.399 13:41:04 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.399 13:41:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:22.399 ************************************ 00:09:22.399 END TEST thread_poller_perf 00:09:22.399 ************************************ 00:09:22.399 13:41:04 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:22.399 00:09:22.399 real 0m2.662s 00:09:22.399 user 0m2.381s 00:09:22.399 sys 0m0.295s 00:09:22.399 13:41:04 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.399 13:41:04 thread -- common/autotest_common.sh@10 -- # set +x 00:09:22.399 ************************************ 00:09:22.399 END TEST thread 00:09:22.399 ************************************ 00:09:22.399 13:41:04 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:09:22.399 13:41:04 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:09:22.399 13:41:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:22.399 13:41:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.399 13:41:04 -- common/autotest_common.sh@10 -- # set +x 00:09:22.665 ************************************ 00:09:22.665 START TEST app_cmdline 00:09:22.665 ************************************ 00:09:22.665 13:41:04 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:09:22.665 * Looking for test storage... 00:09:22.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:22.665 13:41:05 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:22.665 13:41:05 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:09:22.665 13:41:05 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:22.665 13:41:05 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:22.665 13:41:05 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.665 13:41:05 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.665 13:41:05 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.665 13:41:05 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.665 13:41:05 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.665 13:41:05 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.665 13:41:05 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:09:22.665 13:41:05 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.665 13:41:05 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.665 13:41:05 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.665 13:41:05 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.665 13:41:05 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:22.665 13:41:05 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:22.665 13:41:05 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.665 13:41:05 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:22.665 13:41:05 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:22.665 13:41:05 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:22.665 13:41:05 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.665 13:41:05 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:22.665 13:41:05 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.665 13:41:05 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:22.665 13:41:05 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:22.665 13:41:05 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.665 13:41:05 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:22.665 13:41:05 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.665 13:41:05 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.665 13:41:05 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.665 13:41:05 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:22.665 13:41:05 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.665 13:41:05 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:22.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.665 --rc genhtml_branch_coverage=1 
00:09:22.665 --rc genhtml_function_coverage=1 00:09:22.665 --rc genhtml_legend=1 00:09:22.665 --rc geninfo_all_blocks=1 00:09:22.665 --rc geninfo_unexecuted_blocks=1 00:09:22.665 00:09:22.665 ' 00:09:22.666 13:41:05 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:22.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.666 --rc genhtml_branch_coverage=1 00:09:22.666 --rc genhtml_function_coverage=1 00:09:22.666 --rc genhtml_legend=1 00:09:22.666 --rc geninfo_all_blocks=1 00:09:22.666 --rc geninfo_unexecuted_blocks=1 00:09:22.666 00:09:22.666 ' 00:09:22.666 13:41:05 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:22.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.666 --rc genhtml_branch_coverage=1 00:09:22.666 --rc genhtml_function_coverage=1 00:09:22.666 --rc genhtml_legend=1 00:09:22.666 --rc geninfo_all_blocks=1 00:09:22.666 --rc geninfo_unexecuted_blocks=1 00:09:22.666 00:09:22.666 ' 00:09:22.666 13:41:05 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:22.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.666 --rc genhtml_branch_coverage=1 00:09:22.666 --rc genhtml_function_coverage=1 00:09:22.666 --rc genhtml_legend=1 00:09:22.666 --rc geninfo_all_blocks=1 00:09:22.666 --rc geninfo_unexecuted_blocks=1 00:09:22.666 00:09:22.666 ' 00:09:22.666 13:41:05 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:22.666 13:41:05 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=485624 00:09:22.666 13:41:05 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 485624 00:09:22.666 13:41:05 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:22.666 13:41:05 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 485624 ']' 00:09:22.666 13:41:05 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:09:22.666 13:41:05 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.666 13:41:05 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.667 13:41:05 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.667 13:41:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:22.667 [2024-12-05 13:41:05.220504] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:09:22.667 [2024-12-05 13:41:05.220551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid485624 ] 00:09:22.933 [2024-12-05 13:41:05.295947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.933 [2024-12-05 13:41:05.337786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.191 13:41:05 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.191 13:41:05 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:09:23.191 13:41:05 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:09:23.191 { 00:09:23.191 "version": "SPDK v25.01-pre git sha1 2cae84b3c", 00:09:23.191 "fields": { 00:09:23.192 "major": 25, 00:09:23.192 "minor": 1, 00:09:23.192 "patch": 0, 00:09:23.192 "suffix": "-pre", 00:09:23.192 "commit": "2cae84b3c" 00:09:23.192 } 00:09:23.192 } 00:09:23.192 13:41:05 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:23.192 13:41:05 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:23.192 13:41:05 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:09:23.192 13:41:05 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:23.192 13:41:05 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:23.192 13:41:05 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:23.192 13:41:05 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.192 13:41:05 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:23.192 13:41:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:23.192 13:41:05 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.192 13:41:05 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:23.192 13:41:05 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:23.192 13:41:05 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:23.192 13:41:05 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:09:23.192 13:41:05 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:23.192 13:41:05 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.192 13:41:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:23.192 13:41:05 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.192 13:41:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:23.192 13:41:05 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.192 13:41:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:09:23.192 13:41:05 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.192 13:41:05 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:23.192 13:41:05 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:23.450 request: 00:09:23.450 { 00:09:23.450 "method": "env_dpdk_get_mem_stats", 00:09:23.450 "req_id": 1 00:09:23.450 } 00:09:23.450 Got JSON-RPC error response 00:09:23.450 response: 00:09:23.450 { 00:09:23.450 "code": -32601, 00:09:23.450 "message": "Method not found" 00:09:23.450 } 00:09:23.450 13:41:05 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:09:23.450 13:41:05 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:23.450 13:41:05 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:23.450 13:41:05 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:23.450 13:41:05 app_cmdline -- app/cmdline.sh@1 -- # killprocess 485624 00:09:23.450 13:41:05 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 485624 ']' 00:09:23.450 13:41:05 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 485624 00:09:23.450 13:41:05 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:09:23.450 13:41:05 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:23.450 13:41:05 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 485624 00:09:23.450 13:41:06 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:23.450 13:41:06 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:23.450 13:41:06 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 485624' 00:09:23.450 killing process with pid 485624 00:09:23.450 13:41:06 
app_cmdline -- common/autotest_common.sh@973 -- # kill 485624 00:09:23.450 13:41:06 app_cmdline -- common/autotest_common.sh@978 -- # wait 485624 00:09:24.018 00:09:24.018 real 0m1.310s 00:09:24.018 user 0m1.508s 00:09:24.018 sys 0m0.451s 00:09:24.018 13:41:06 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.018 13:41:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:24.018 ************************************ 00:09:24.018 END TEST app_cmdline 00:09:24.018 ************************************ 00:09:24.018 13:41:06 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:24.018 13:41:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:24.018 13:41:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.018 13:41:06 -- common/autotest_common.sh@10 -- # set +x 00:09:24.018 ************************************ 00:09:24.018 START TEST version 00:09:24.018 ************************************ 00:09:24.018 13:41:06 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:24.018 * Looking for test storage... 
00:09:24.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:24.018 13:41:06 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:24.018 13:41:06 version -- common/autotest_common.sh@1711 -- # lcov --version 00:09:24.018 13:41:06 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:24.018 13:41:06 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:24.018 13:41:06 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.018 13:41:06 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.018 13:41:06 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.018 13:41:06 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.018 13:41:06 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.018 13:41:06 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.018 13:41:06 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.018 13:41:06 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.018 13:41:06 version -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.018 13:41:06 version -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.018 13:41:06 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.018 13:41:06 version -- scripts/common.sh@344 -- # case "$op" in 00:09:24.018 13:41:06 version -- scripts/common.sh@345 -- # : 1 00:09:24.018 13:41:06 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.018 13:41:06 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:24.018 13:41:06 version -- scripts/common.sh@365 -- # decimal 1 00:09:24.018 13:41:06 version -- scripts/common.sh@353 -- # local d=1 00:09:24.018 13:41:06 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.018 13:41:06 version -- scripts/common.sh@355 -- # echo 1 00:09:24.018 13:41:06 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.018 13:41:06 version -- scripts/common.sh@366 -- # decimal 2 00:09:24.018 13:41:06 version -- scripts/common.sh@353 -- # local d=2 00:09:24.018 13:41:06 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.018 13:41:06 version -- scripts/common.sh@355 -- # echo 2 00:09:24.018 13:41:06 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.018 13:41:06 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.018 13:41:06 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.018 13:41:06 version -- scripts/common.sh@368 -- # return 0 00:09:24.018 13:41:06 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.018 13:41:06 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:24.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.018 --rc genhtml_branch_coverage=1 00:09:24.018 --rc genhtml_function_coverage=1 00:09:24.018 --rc genhtml_legend=1 00:09:24.018 --rc geninfo_all_blocks=1 00:09:24.018 --rc geninfo_unexecuted_blocks=1 00:09:24.018 00:09:24.018 ' 00:09:24.018 13:41:06 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:24.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.018 --rc genhtml_branch_coverage=1 00:09:24.018 --rc genhtml_function_coverage=1 00:09:24.018 --rc genhtml_legend=1 00:09:24.018 --rc geninfo_all_blocks=1 00:09:24.018 --rc geninfo_unexecuted_blocks=1 00:09:24.018 00:09:24.018 ' 00:09:24.018 13:41:06 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:24.019 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.019 --rc genhtml_branch_coverage=1 00:09:24.019 --rc genhtml_function_coverage=1 00:09:24.019 --rc genhtml_legend=1 00:09:24.019 --rc geninfo_all_blocks=1 00:09:24.019 --rc geninfo_unexecuted_blocks=1 00:09:24.019 00:09:24.019 ' 00:09:24.019 13:41:06 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:24.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.019 --rc genhtml_branch_coverage=1 00:09:24.019 --rc genhtml_function_coverage=1 00:09:24.019 --rc genhtml_legend=1 00:09:24.019 --rc geninfo_all_blocks=1 00:09:24.019 --rc geninfo_unexecuted_blocks=1 00:09:24.019 00:09:24.019 ' 00:09:24.019 13:41:06 version -- app/version.sh@17 -- # get_header_version major 00:09:24.019 13:41:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:24.019 13:41:06 version -- app/version.sh@14 -- # cut -f2 00:09:24.019 13:41:06 version -- app/version.sh@14 -- # tr -d '"' 00:09:24.019 13:41:06 version -- app/version.sh@17 -- # major=25 00:09:24.019 13:41:06 version -- app/version.sh@18 -- # get_header_version minor 00:09:24.019 13:41:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:24.019 13:41:06 version -- app/version.sh@14 -- # cut -f2 00:09:24.019 13:41:06 version -- app/version.sh@14 -- # tr -d '"' 00:09:24.019 13:41:06 version -- app/version.sh@18 -- # minor=1 00:09:24.019 13:41:06 version -- app/version.sh@19 -- # get_header_version patch 00:09:24.019 13:41:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:24.019 13:41:06 version -- app/version.sh@14 -- # cut -f2 00:09:24.019 13:41:06 version -- app/version.sh@14 -- # tr -d '"' 00:09:24.019 
13:41:06 version -- app/version.sh@19 -- # patch=0 00:09:24.019 13:41:06 version -- app/version.sh@20 -- # get_header_version suffix 00:09:24.019 13:41:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:24.019 13:41:06 version -- app/version.sh@14 -- # cut -f2 00:09:24.019 13:41:06 version -- app/version.sh@14 -- # tr -d '"' 00:09:24.019 13:41:06 version -- app/version.sh@20 -- # suffix=-pre 00:09:24.019 13:41:06 version -- app/version.sh@22 -- # version=25.1 00:09:24.019 13:41:06 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:24.019 13:41:06 version -- app/version.sh@28 -- # version=25.1rc0 00:09:24.019 13:41:06 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:24.019 13:41:06 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:24.277 13:41:06 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:24.277 13:41:06 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:24.277 00:09:24.277 real 0m0.242s 00:09:24.277 user 0m0.158s 00:09:24.277 sys 0m0.127s 00:09:24.277 13:41:06 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.277 13:41:06 version -- common/autotest_common.sh@10 -- # set +x 00:09:24.277 ************************************ 00:09:24.277 END TEST version 00:09:24.277 ************************************ 00:09:24.277 13:41:06 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:24.277 13:41:06 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:09:24.277 13:41:06 -- spdk/autotest.sh@194 -- # uname -s 00:09:24.277 13:41:06 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:09:24.277 13:41:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:24.277 13:41:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:24.277 13:41:06 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:24.277 13:41:06 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:09:24.277 13:41:06 -- spdk/autotest.sh@260 -- # timing_exit lib 00:09:24.277 13:41:06 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:24.277 13:41:06 -- common/autotest_common.sh@10 -- # set +x 00:09:24.277 13:41:06 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:09:24.277 13:41:06 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:09:24.277 13:41:06 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:09:24.277 13:41:06 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:09:24.277 13:41:06 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:09:24.277 13:41:06 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:09:24.277 13:41:06 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:24.277 13:41:06 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:24.277 13:41:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.278 13:41:06 -- common/autotest_common.sh@10 -- # set +x 00:09:24.278 ************************************ 00:09:24.278 START TEST nvmf_tcp 00:09:24.278 ************************************ 00:09:24.278 13:41:06 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:24.278 * Looking for test storage... 
00:09:24.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:24.278 13:41:06 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:24.278 13:41:06 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:09:24.278 13:41:06 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:24.537 13:41:06 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:24.537 13:41:06 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.537 13:41:06 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.537 13:41:06 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.537 13:41:06 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.537 13:41:06 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.537 13:41:06 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.537 13:41:06 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.537 13:41:06 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.537 13:41:06 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.537 13:41:06 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.537 13:41:06 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.537 13:41:06 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:24.537 13:41:06 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:09:24.537 13:41:06 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.537 13:41:06 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:24.537 13:41:06 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:24.537 13:41:06 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:09:24.537 13:41:06 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.537 13:41:06 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:09:24.537 13:41:06 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.537 13:41:06 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:24.537 13:41:06 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:09:24.537 13:41:06 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.537 13:41:06 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:09:24.537 13:41:06 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.537 13:41:06 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.537 13:41:06 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.537 13:41:06 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:09:24.537 13:41:06 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.537 13:41:06 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:24.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.537 --rc genhtml_branch_coverage=1 00:09:24.537 --rc genhtml_function_coverage=1 00:09:24.537 --rc genhtml_legend=1 00:09:24.537 --rc geninfo_all_blocks=1 00:09:24.537 --rc geninfo_unexecuted_blocks=1 00:09:24.537 00:09:24.537 ' 00:09:24.537 13:41:06 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:24.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.537 --rc genhtml_branch_coverage=1 00:09:24.537 --rc genhtml_function_coverage=1 00:09:24.537 --rc genhtml_legend=1 00:09:24.537 --rc geninfo_all_blocks=1 00:09:24.537 --rc geninfo_unexecuted_blocks=1 00:09:24.537 00:09:24.537 ' 00:09:24.537 13:41:06 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:09:24.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.537 --rc genhtml_branch_coverage=1 00:09:24.537 --rc genhtml_function_coverage=1 00:09:24.537 --rc genhtml_legend=1 00:09:24.537 --rc geninfo_all_blocks=1 00:09:24.537 --rc geninfo_unexecuted_blocks=1 00:09:24.537 00:09:24.537 ' 00:09:24.537 13:41:06 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:24.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.537 --rc genhtml_branch_coverage=1 00:09:24.537 --rc genhtml_function_coverage=1 00:09:24.537 --rc genhtml_legend=1 00:09:24.537 --rc geninfo_all_blocks=1 00:09:24.537 --rc geninfo_unexecuted_blocks=1 00:09:24.537 00:09:24.537 ' 00:09:24.537 13:41:06 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:24.537 13:41:06 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:09:24.537 13:41:06 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:24.537 13:41:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:24.537 13:41:06 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.537 13:41:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:24.537 ************************************ 00:09:24.537 START TEST nvmf_target_core 00:09:24.537 ************************************ 00:09:24.537 13:41:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:24.537 * Looking for test storage... 
00:09:24.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:24.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.537 --rc genhtml_branch_coverage=1 00:09:24.537 --rc genhtml_function_coverage=1 00:09:24.537 --rc genhtml_legend=1 00:09:24.537 --rc geninfo_all_blocks=1 00:09:24.537 --rc geninfo_unexecuted_blocks=1 00:09:24.537 00:09:24.537 ' 00:09:24.537 13:41:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:24.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.537 --rc genhtml_branch_coverage=1 
00:09:24.537 --rc genhtml_function_coverage=1 00:09:24.537 --rc genhtml_legend=1 00:09:24.538 --rc geninfo_all_blocks=1 00:09:24.538 --rc geninfo_unexecuted_blocks=1 00:09:24.538 00:09:24.538 ' 00:09:24.538 13:41:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:24.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.538 --rc genhtml_branch_coverage=1 00:09:24.538 --rc genhtml_function_coverage=1 00:09:24.538 --rc genhtml_legend=1 00:09:24.538 --rc geninfo_all_blocks=1 00:09:24.538 --rc geninfo_unexecuted_blocks=1 00:09:24.538 00:09:24.538 ' 00:09:24.538 13:41:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:24.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.538 --rc genhtml_branch_coverage=1 00:09:24.538 --rc genhtml_function_coverage=1 00:09:24.538 --rc genhtml_legend=1 00:09:24.538 --rc geninfo_all_blocks=1 00:09:24.538 --rc geninfo_unexecuted_blocks=1 00:09:24.538 00:09:24.538 ' 00:09:24.538 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:24.538 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:24.538 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:24.538 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:24.538 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.538 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.538 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.538 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.538 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.538 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.538 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.538 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.538 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:24.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:24.798 ************************************ 00:09:24.798 START TEST nvmf_abort 00:09:24.798 ************************************ 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:24.798 * Looking for test storage... 
00:09:24.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.798 
13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:09:24.798 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:24.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.799 --rc genhtml_branch_coverage=1 00:09:24.799 --rc genhtml_function_coverage=1 00:09:24.799 --rc genhtml_legend=1 00:09:24.799 --rc geninfo_all_blocks=1 00:09:24.799 --rc 
geninfo_unexecuted_blocks=1 00:09:24.799 00:09:24.799 ' 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:24.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.799 --rc genhtml_branch_coverage=1 00:09:24.799 --rc genhtml_function_coverage=1 00:09:24.799 --rc genhtml_legend=1 00:09:24.799 --rc geninfo_all_blocks=1 00:09:24.799 --rc geninfo_unexecuted_blocks=1 00:09:24.799 00:09:24.799 ' 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:24.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.799 --rc genhtml_branch_coverage=1 00:09:24.799 --rc genhtml_function_coverage=1 00:09:24.799 --rc genhtml_legend=1 00:09:24.799 --rc geninfo_all_blocks=1 00:09:24.799 --rc geninfo_unexecuted_blocks=1 00:09:24.799 00:09:24.799 ' 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:24.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.799 --rc genhtml_branch_coverage=1 00:09:24.799 --rc genhtml_function_coverage=1 00:09:24.799 --rc genhtml_legend=1 00:09:24.799 --rc geninfo_all_blocks=1 00:09:24.799 --rc geninfo_unexecuted_blocks=1 00:09:24.799 00:09:24.799 ' 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.799 13:41:07 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:24.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:24.799 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:25.058 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:25.058 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:25.058 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:25.058 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:25.058 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.058 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:25.058 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:25.058 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:25.058 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.058 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.058 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.058 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:25.058 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:09:25.058 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:09:25.058 13:41:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:31.623 13:41:13 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:31.623 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:31.623 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:31.623 13:41:13 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:31.623 Found net devices under 0000:86:00.0: cvl_0_0 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:09:31.623 Found net devices under 0000:86:00.1: cvl_0_1 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:31.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:31.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.512 ms 00:09:31.623 00:09:31.623 --- 10.0.0.2 ping statistics --- 00:09:31.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.623 rtt min/avg/max/mdev = 0.512/0.512/0.512/0.000 ms 00:09:31.623 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:31.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:31.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:09:31.623 00:09:31.623 --- 10.0.0.1 ping statistics --- 00:09:31.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.623 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=489304 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 489304 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 489304 ']' 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:31.624 [2024-12-05 13:41:13.477157] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:09:31.624 [2024-12-05 13:41:13.477205] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.624 [2024-12-05 13:41:13.557302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:31.624 [2024-12-05 13:41:13.600057] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.624 [2024-12-05 13:41:13.600093] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.624 [2024-12-05 13:41:13.600100] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.624 [2024-12-05 13:41:13.600106] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.624 [2024-12-05 13:41:13.600111] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:31.624 [2024-12-05 13:41:13.601492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:31.624 [2024-12-05 13:41:13.601603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.624 [2024-12-05 13:41:13.601603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:31.624 [2024-12-05 13:41:13.737392] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:31.624 Malloc0 00:09:31.624 13:41:13 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:31.624 Delay0 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:31.624 [2024-12-05 13:41:13.807947] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.624 13:41:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:31.624 [2024-12-05 13:41:13.945012] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:33.529 Initializing NVMe Controllers 00:09:33.529 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:33.529 controller IO queue size 128 less than required 00:09:33.529 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:33.529 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:33.529 Initialization complete. Launching workers. 
00:09:33.529 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 37394 00:09:33.529 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37459, failed to submit 62 00:09:33.529 success 37398, unsuccessful 61, failed 0 00:09:33.530 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:33.530 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.530 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:33.530 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.530 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:33.530 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:33.530 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:33.530 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:09:33.530 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:33.530 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:09:33.530 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:33.530 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:33.530 rmmod nvme_tcp 00:09:33.530 rmmod nvme_fabrics 00:09:33.530 rmmod nvme_keyring 00:09:33.530 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:33.530 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:09:33.530 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:09:33.530 13:41:16 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 489304 ']' 00:09:33.530 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 489304 00:09:33.530 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 489304 ']' 00:09:33.530 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 489304 00:09:33.530 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:09:33.530 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:33.530 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 489304 00:09:33.789 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:33.789 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:33.789 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 489304' 00:09:33.789 killing process with pid 489304 00:09:33.789 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 489304 00:09:33.789 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 489304 00:09:33.789 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:33.789 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:33.789 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:33.789 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:09:33.789 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:09:33.789 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:09:33.789 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:09:33.789 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:33.789 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:33.789 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.789 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.789 13:41:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:36.324 00:09:36.324 real 0m11.213s 00:09:36.324 user 0m11.525s 00:09:36.324 sys 0m5.472s 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:36.324 ************************************ 00:09:36.324 END TEST nvmf_abort 00:09:36.324 ************************************ 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:36.324 ************************************ 00:09:36.324 START TEST nvmf_ns_hotplug_stress 00:09:36.324 ************************************ 00:09:36.324 13:41:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:36.324 * Looking for test storage... 00:09:36.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.324 
13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.324 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.325 13:41:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:36.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.325 --rc genhtml_branch_coverage=1 00:09:36.325 --rc genhtml_function_coverage=1 00:09:36.325 --rc genhtml_legend=1 00:09:36.325 --rc geninfo_all_blocks=1 00:09:36.325 --rc geninfo_unexecuted_blocks=1 00:09:36.325 00:09:36.325 ' 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:36.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.325 --rc genhtml_branch_coverage=1 00:09:36.325 --rc genhtml_function_coverage=1 00:09:36.325 --rc genhtml_legend=1 00:09:36.325 --rc geninfo_all_blocks=1 00:09:36.325 --rc geninfo_unexecuted_blocks=1 00:09:36.325 00:09:36.325 ' 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:36.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.325 --rc genhtml_branch_coverage=1 00:09:36.325 --rc genhtml_function_coverage=1 00:09:36.325 --rc genhtml_legend=1 00:09:36.325 --rc geninfo_all_blocks=1 00:09:36.325 --rc geninfo_unexecuted_blocks=1 00:09:36.325 00:09:36.325 ' 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:36.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.325 --rc genhtml_branch_coverage=1 00:09:36.325 --rc genhtml_function_coverage=1 00:09:36.325 --rc genhtml_legend=1 00:09:36.325 --rc geninfo_all_blocks=1 00:09:36.325 --rc geninfo_unexecuted_blocks=1 00:09:36.325 
00:09:36.325 ' 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:36.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:09:36.325 13:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:09:42.915 13:41:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:42.915 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:42.915 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:42.915 13:41:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:42.915 Found net devices under 0000:86:00.0: cvl_0_0 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:42.915 13:41:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:42.915 Found net devices under 0000:86:00.1: cvl_0_1 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:42.915 13:41:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:42.915 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:42.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:42.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:09:42.915 00:09:42.915 --- 10.0.0.2 ping statistics --- 00:09:42.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.915 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:42.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:42.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:09:42.916 00:09:42.916 --- 10.0.0.1 ping statistics --- 00:09:42.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.916 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=493329 00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 493329 00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 493329 ']' 00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:42.916 [2024-12-05 13:41:24.721785] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:09:42.916 [2024-12-05 13:41:24.721835] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.916 [2024-12-05 13:41:24.800761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:42.916 [2024-12-05 13:41:24.841946] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.916 [2024-12-05 13:41:24.841982] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:42.916 [2024-12-05 13:41:24.841989] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.916 [2024-12-05 13:41:24.841995] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.916 [2024-12-05 13:41:24.842000] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:42.916 [2024-12-05 13:41:24.843429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.916 [2024-12-05 13:41:24.843536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.916 [2024-12-05 13:41:24.843537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:42.916 13:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:42.916 [2024-12-05 13:41:25.144845] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:42.916 13:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:42.916 13:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:43.174 [2024-12-05 13:41:25.562306] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:43.174 13:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:43.432 13:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:43.432 Malloc0 00:09:43.432 13:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:43.691 Delay0 00:09:43.691 13:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.950 13:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:44.208 NULL1 00:09:44.208 13:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:44.208 13:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=493800 00:09:44.208 13:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:44.208 13:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493800 00:09:44.208 13:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.466 Read completed with error (sct=0, sc=11) 00:09:44.466 13:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.723 13:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:44.723 13:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:44.982 true 00:09:44.982 13:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493800 00:09:44.982 13:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.916 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.916 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:45.916 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:46.175 true 00:09:46.175 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493800 00:09:46.175 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.175 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.433 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:46.434 13:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:46.692 true 00:09:46.692 13:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493800 00:09:46.692 13:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:47.885 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:47.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:47.885 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:47.885 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:48.142 true 00:09:48.142 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493800 00:09:48.142 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.400 13:41:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.657 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:48.657 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:48.657 true 00:09:48.657 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493800 00:09:48.657 13:41:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.032 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:09:50.032 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.032 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:50.032 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:50.291 true 00:09:50.291 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493800 00:09:50.291 13:41:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.226 13:41:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.226 13:41:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:51.226 13:41:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:51.483 true 00:09:51.483 13:41:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493800 00:09:51.483 13:41:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.741 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.000 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:52.000 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:52.000 true 00:09:52.000 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493800 00:09:52.000 13:41:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.373 13:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:09:53.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.373 13:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:53.373 13:41:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:53.630 true 00:09:53.630 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493800 00:09:53.631 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.563 13:41:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.563 13:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:54.563 13:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:54.820 true 00:09:54.820 13:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493800 00:09:54.820 13:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.820 13:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.077 13:41:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:55.077 13:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:55.334 true 00:09:55.334 13:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493800 00:09:55.334 13:41:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.706 13:41:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.706 13:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:56.707 13:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:56.964 true 00:09:56.964 13:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493800 00:09:56.964 13:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.964 13:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.222 13:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:57.222 13:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:57.479 true 00:09:57.479 13:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493800 00:09:57.479 13:41:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:58.875 13:41:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:58.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:58.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:58.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:58.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:58.875 13:41:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:58.875 13:41:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:59.133 true 00:09:59.133 13:41:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 493800 00:09:59.133 13:41:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.067 13:41:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.067 13:41:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:00.067 13:41:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:00.325 true 00:10:00.325 13:41:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493800 00:10:00.325 13:41:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.325 13:41:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.583 13:41:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:00.583 13:41:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:00.840 true 00:10:00.840 13:41:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493800 00:10:00.840 13:41:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.774 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:01.774 13:41:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:02.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:02.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:02.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:02.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:02.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:02.032 13:41:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:02.032 13:41:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:02.290 true 00:10:02.290 13:41:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493800 00:10:02.290 13:41:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.332 13:41:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.332 13:41:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:03.332 13:41:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:03.624 true 00:10:03.624 13:41:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493800 00:10:03.624 13:41:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.882 13:41:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.882 13:41:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:03.882 13:41:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:04.139 true 00:10:04.139 13:41:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493800 00:10:04.139 13:41:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:05.073 13:41:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.331 13:41:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 
00:10:05.331 13:41:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:05.589 true 00:10:05.589 13:41:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493800 00:10:05.589 13:41:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.846 13:41:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.846 13:41:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:05.846 13:41:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:06.104 true 00:10:06.104 13:41:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493800 00:10:06.104 13:41:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.478 13:41:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:10:07.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.478 13:41:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:07.478 13:41:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:07.736 true 00:10:07.736 13:41:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493800 00:10:07.736 13:41:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.671 13:41:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.671 13:41:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:08.671 13:41:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:08.930 true 00:10:08.930 13:41:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493800 00:10:08.930 13:41:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.930 13:41:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.187 13:41:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:09.187 13:41:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:09.444 true 00:10:09.444 13:41:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493800 00:10:09.444 13:41:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.375 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.375 13:41:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.375 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.631 13:41:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:10.631 13:41:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:10.887 true 00:10:10.887 13:41:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 493800 00:10:10.887 13:41:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.816 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.816 13:41:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.816 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.816 13:41:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:11.816 13:41:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:12.074 true 00:10:12.074 13:41:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493800 00:10:12.074 13:41:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.333 13:41:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.592 13:41:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:12.592 13:41:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:12.592 true 00:10:12.592 13:41:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493800 00:10:12.592 13:41:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.966 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.966 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.966 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.966 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.966 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.966 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.967 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.967 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:13.967 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:14.225 true 00:10:14.225 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493800 00:10:14.225 13:41:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.161 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:10:15.161 Initializing NVMe Controllers 00:10:15.161 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:15.161 Controller IO queue size 128, less than required. 00:10:15.161 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:15.161 Controller IO queue size 128, less than required. 00:10:15.161 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:15.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:15.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:15.161 Initialization complete. Launching workers. 00:10:15.161 ======================================================== 00:10:15.161 Latency(us) 00:10:15.161 Device Information : IOPS MiB/s Average min max 00:10:15.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2016.57 0.98 43802.75 1955.76 1012953.32 00:10:15.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17663.40 8.62 7246.24 2133.14 443458.52 00:10:15.161 ======================================================== 00:10:15.161 Total : 19679.97 9.61 10992.11 1955.76 1012953.32 00:10:15.161 00:10:15.162 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:15.162 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:15.420 true 00:10:15.420 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 493800 00:10:15.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (493800) - No such process 00:10:15.420 13:41:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 493800 00:10:15.420 13:41:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.712 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:15.970 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:15.970 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:15.970 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:15.970 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:15.970 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:15.970 null0 00:10:15.970 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:15.970 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:15.970 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:16.228 null1 00:10:16.228 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:16.228 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
00:10:16.228 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:16.487 null2 00:10:16.487 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:16.487 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:16.487 13:41:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:16.487 null3 00:10:16.746 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:16.746 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:16.746 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:16.746 null4 00:10:16.746 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:16.746 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:16.746 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:17.004 null5 00:10:17.004 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:17.004 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:17.005 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:17.263 null6 00:10:17.263 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:17.264 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:17.264 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:17.523 null7 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:17.523 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:17.524 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:17.524 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.524 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:17.524 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:17.524 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:17.524 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:17.524 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:17.524 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:17.524 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:17.524 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.524 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:17.524 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:17.524 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:17.524 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:17.524 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:17.524 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:17.524 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:17.524 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:17.524 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:17.524 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.524 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:17.524 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 499432 499433 499435 499437 499439 499441 499443 499445 00:10:17.524 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:17.524 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:17.524 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:17.524 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:17.524 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.524 13:41:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:17.524 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:17.524 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:17.524 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:17.524 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.524 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:17.524 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:17.524 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:17.783 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:17.783 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.783 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.783 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:17.783 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.783 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.783 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.783 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:17.783 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.783 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:17.783 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.783 13:42:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.783 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:17.783 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.783 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.783 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:17.783 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.783 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.783 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:17.783 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.783 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.783 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:17.783 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.783 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:10:17.783 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:18.042 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:18.042 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:18.042 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:18.042 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.042 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:18.042 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:18.042 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:18.042 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:18.301 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.301 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.301 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:18.301 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.301 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.301 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:18.301 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.301 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.301 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:18.301 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.301 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.301 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:10:18.301 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.301 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.301 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.301 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.302 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:18.302 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:18.302 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.302 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.302 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:18.302 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.302 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.302 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:18.561 13:42:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:18.561 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:18.561 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:18.562 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.562 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:18.562 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:18.562 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:18.562 13:42:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:18.562 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.562 13:42:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.562 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:18.562 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.562 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.562 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.562 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.562 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:18.562 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:18.562 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.562 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.562 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:18.562 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.562 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:10:18.562 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:18.562 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.562 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.562 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:18.562 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.562 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.562 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:18.820 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.820 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.820 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:18.820 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:18.820 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:18.821 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:18.821 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:18.821 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:18.821 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:18.821 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:18.821 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.079 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.079 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.079 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:10:19.079 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.079 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.079 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:19.079 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.079 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.079 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.079 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.079 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:19.079 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:19.079 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.079 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.079 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:19.079 13:42:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.079 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.079 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:19.079 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.079 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.079 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:19.080 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.080 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.080 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:19.338 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:19.338 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:19.338 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:19.338 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:19.338 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:19.338 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.338 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:19.338 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:19.598 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.598 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.598 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:19.598 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.598 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.598 
13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:19.598 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.598 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.598 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:19.598 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.598 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.598 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:19.598 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.598 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.598 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.598 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.598 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:19.598 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.598 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:19.598 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.598 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:19.598 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.598 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.598 13:42:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:19.598 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:19.598 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:19.598 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:19.857 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.857 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:19.857 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:19.857 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:19.857 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:19.857 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.857 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.857 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:19.857 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.857 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.857 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:19.857 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.857 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.857 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:19.857 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.857 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.857 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:19.857 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.857 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.857 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:19.857 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.857 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.858 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:19.858 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.858 13:42:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.858 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:19.858 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.858 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.858 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:20.117 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:20.117 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:20.117 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:20.117 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:20.117 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:20.117 13:42:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:20.117 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:20.117 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.375 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.375 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.376 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:20.376 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.376 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.376 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:20.376 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.376 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.376 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:20.376 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.376 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.376 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:20.376 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.376 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.376 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:20.376 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.376 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.376 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:20.376 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.376 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.376 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:20.376 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.376 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.376 13:42:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:20.635 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:20.635 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:20.635 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:20.635 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:20.635 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:20.635 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:20.635 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.635 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:20.635 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.635 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.635 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:20.893 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.894 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.894 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:20.894 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.894 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.894 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:20.894 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.894 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.894 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.894 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:20.894 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.894 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:20.894 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.894 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.894 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:20.894 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.894 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.894 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:20.894 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.894 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.894 13:42:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:20.894 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:20.894 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:20.894 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:20.894 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:20.894 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.894 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:20.894 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:20.894 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:21.153 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.153 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.153 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:21.153 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.153 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.153 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:21.153 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.153 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.153 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:21.153 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.153 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.153 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:21.153 13:42:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.153 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.153 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:21.153 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.153 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.153 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:21.153 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.153 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.153 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:21.153 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.153 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.153 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:21.412 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:21.412 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:21.412 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.412 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:21.412 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:21.412 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:21.412 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:21.412 13:42:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.672 13:42:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:21.672 13:42:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:21.672 rmmod nvme_tcp 00:10:21.672 rmmod nvme_fabrics 00:10:21.672 rmmod nvme_keyring 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 493329 ']' 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 493329 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 493329 ']' 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 493329 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 493329 00:10:21.672 13:42:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 493329' 00:10:21.672 killing process with pid 493329 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 493329 00:10:21.672 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 493329 00:10:21.932 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:21.932 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:21.932 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:21.932 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:10:21.932 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:21.932 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:10:21.932 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:10:21.932 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:21.932 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:21.932 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.932 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:10:21.932 13:42:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.837 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:24.095 00:10:24.095 real 0m47.957s 00:10:24.095 user 3m14.577s 00:10:24.095 sys 0m15.570s 00:10:24.095 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.095 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.095 ************************************ 00:10:24.095 END TEST nvmf_ns_hotplug_stress 00:10:24.095 ************************************ 00:10:24.095 13:42:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:24.095 13:42:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:24.095 13:42:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.095 13:42:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:24.095 ************************************ 00:10:24.095 START TEST nvmf_delete_subsystem 00:10:24.095 ************************************ 00:10:24.095 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:24.095 * Looking for test storage... 
00:10:24.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:24.095 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:24.095 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:24.095 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:24.095 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:24.095 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:24.095 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:24.095 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:24.095 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:24.095 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:24.095 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:24.095 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:24.095 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:24.095 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:24.095 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:24.095 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:24.095 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:10:24.095 13:42:06 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:10:24.095 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:24.095 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:24.096 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:10:24.096 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:10:24.096 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:24.096 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:10:24.096 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:24.096 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:10:24.096 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:10:24.096 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:24.096 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:10:24.355 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:24.355 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:24.355 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:24.355 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:10:24.355 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:10:24.355 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:24.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.355 --rc genhtml_branch_coverage=1 00:10:24.355 --rc genhtml_function_coverage=1 00:10:24.355 --rc genhtml_legend=1 00:10:24.355 --rc geninfo_all_blocks=1 00:10:24.355 --rc geninfo_unexecuted_blocks=1 00:10:24.355 00:10:24.355 ' 00:10:24.355 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:24.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.355 --rc genhtml_branch_coverage=1 00:10:24.355 --rc genhtml_function_coverage=1 00:10:24.355 --rc genhtml_legend=1 00:10:24.355 --rc geninfo_all_blocks=1 00:10:24.355 --rc geninfo_unexecuted_blocks=1 00:10:24.355 00:10:24.355 ' 00:10:24.355 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:24.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.355 --rc genhtml_branch_coverage=1 00:10:24.355 --rc genhtml_function_coverage=1 00:10:24.355 --rc genhtml_legend=1 00:10:24.355 --rc geninfo_all_blocks=1 00:10:24.355 --rc geninfo_unexecuted_blocks=1 00:10:24.355 00:10:24.355 ' 00:10:24.355 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:24.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.355 --rc genhtml_branch_coverage=1 00:10:24.355 --rc genhtml_function_coverage=1 00:10:24.355 --rc genhtml_legend=1 00:10:24.355 --rc geninfo_all_blocks=1 00:10:24.355 --rc geninfo_unexecuted_blocks=1 00:10:24.355 00:10:24.355 ' 00:10:24.355 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:24.355 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:10:24.355 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:24.355 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:24.356 13:42:06 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:24.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:24.356 13:42:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:30.920 13:42:12 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:30.920 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:30.920 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:30.920 Found net devices under 0000:86:00.0: cvl_0_0 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:86:00.1: cvl_0_1' 00:10:30.920 Found net devices under 0000:86:00.1: cvl_0_1 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:30.920 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:30.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:30.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:10:30.921 00:10:30.921 --- 10.0.0.2 ping statistics --- 00:10:30.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.921 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:30.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:30.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:10:30.921 00:10:30.921 --- 10.0.0.1 ping statistics --- 00:10:30.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.921 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:30.921 13:42:12 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=504334 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 504334 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 504334 ']' 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:30.921 [2024-12-05 13:42:12.752538] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:10:30.921 [2024-12-05 13:42:12.752590] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.921 [2024-12-05 13:42:12.832687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:30.921 [2024-12-05 13:42:12.873756] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.921 [2024-12-05 13:42:12.873793] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.921 [2024-12-05 13:42:12.873801] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.921 [2024-12-05 13:42:12.873808] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:30.921 [2024-12-05 13:42:12.873814] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:30.921 [2024-12-05 13:42:12.874988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.921 [2024-12-05 13:42:12.874991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:30.921 13:42:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:30.921 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:30.921 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:30.921 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.921 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:30.921 [2024-12-05 13:42:13.010978] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:30.921 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.921 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:30.921 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.921 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:10:30.921 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.921 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:30.921 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.921 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:30.921 [2024-12-05 13:42:13.031170] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:30.921 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.921 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:30.921 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.921 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:30.921 NULL1 00:10:30.921 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.921 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:30.921 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.921 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:30.921 Delay0 00:10:30.921 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.921 13:42:13 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.921 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.921 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:30.921 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.921 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=504362 00:10:30.921 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:30.922 13:42:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:30.922 [2024-12-05 13:42:13.142880] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:10:32.827 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:32.827 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.827 13:42:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 starting I/O failed: -6 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 starting I/O failed: -6 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 starting I/O failed: -6 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 starting I/O failed: -6 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 starting I/O failed: -6 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 starting I/O failed: -6 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 starting I/O failed: -6 
00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 starting I/O failed: -6 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 starting I/O failed: -6 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 starting I/O failed: -6 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 starting I/O failed: -6 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error 
(sct=0, sc=8) 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read 
completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 [2024-12-05 13:42:15.257898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc9860 is same with the state(6) to be set 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 starting I/O failed: -6 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 starting I/O failed: -6 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 starting I/O failed: -6 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 starting I/O failed: -6 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 starting I/O failed: -6 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Write completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.827 starting I/O failed: -6 00:10:32.827 Read completed with error (sct=0, sc=8) 00:10:32.828 Write completed with error (sct=0, sc=8) 00:10:32.828 Read completed with error (sct=0, sc=8) 00:10:32.828 Write completed with error (sct=0, sc=8) 00:10:32.828 starting I/O 
failed: -6
00:10:32.828 [repeated "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" completions, interleaved with "starting I/O failed: -6"]
00:10:32.828 [2024-12-05 13:42:15.262912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f100400d4b0 is same with the state(6) to be set
00:10:32.828 [further repeated "Read/Write completed with error (sct=0, sc=8)" completions]
00:10:33.763 [2024-12-05 13:42:16.237045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdca9b0 is same with the state(6) to be set
00:10:33.763 [further repeated "Read/Write completed with error (sct=0, sc=8)" completions]
00:10:33.763 [2024-12-05 13:42:16.260602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc9680 is same with the state(6) to be set
00:10:33.763 [further repeated "Read/Write completed with error (sct=0, sc=8)" completions]
00:10:33.763 [2024-12-05 13:42:16.260916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc92c0 is same with the state(6) to be set
00:10:33.763 [further repeated "Read/Write completed with error (sct=0, sc=8)" completions]
00:10:33.763 [2024-12-05 13:42:16.265350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f100400d7e0 is same with the state(6) to be set
00:10:33.763 [further repeated "Read/Write completed with error (sct=0, sc=8)" completions]
00:10:33.763 [2024-12-05 13:42:16.265918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x7f100400d020 is same with the state(6) to be set
00:10:33.763 Initializing NVMe Controllers
00:10:33.763 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:33.763 Controller IO queue size 128, less than required.
00:10:33.763 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:33.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:33.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:33.763 Initialization complete. Launching workers.
00:10:33.763 ========================================================
00:10:33.763 Latency(us)
00:10:33.763 Device Information : IOPS MiB/s Average min max
00:10:33.764 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.34 0.08 900081.24 348.63 1042574.13
00:10:33.764 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 167.34 0.08 909013.79 241.57 2001355.93
00:10:33.764 ========================================================
00:10:33.764 Total : 335.68 0.16 904534.26 241.57 2001355.93
00:10:33.764
00:10:33.764 [2024-12-05 13:42:16.266458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdca9b0 (9): Bad file descriptor
00:10:33.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:10:33.764 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:33.764 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:10:33.764 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 504362
00:10:33.764 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:10:34.330 13:42:16
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:34.330 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 504362 00:10:34.330 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (504362) - No such process 00:10:34.330 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 504362 00:10:34.330 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:10:34.330 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 504362 00:10:34.330 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:10:34.330 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:34.330 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:10:34.330 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:34.330 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 504362 00:10:34.330 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:10:34.330 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:34.330 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:34.330 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:34.330 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:34.330 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.330 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:34.330 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.330 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:34.330 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.330 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:34.330 [2024-12-05 13:42:16.798758] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:34.330 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.330 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.330 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.330 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:34.330 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.330 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=505054 00:10:34.330 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:34.330 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:34.330 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 505054 00:10:34.330 13:42:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:34.330 [2024-12-05 13:42:16.884474] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:10:34.895 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:34.895 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 505054 00:10:34.895 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:35.460 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:35.460 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 505054 00:10:35.460 13:42:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:36.027 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:36.027 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 505054 00:10:36.027 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:36.284 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 
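The xtrace lines above show delete_subsystem.sh polling the perf process: probe the PID with `kill -0`, sleep 0.5 s, and give up once a retry counter passes its bound. A minimal standalone sketch of that poll-until-exit loop (the `wait_for_exit` helper name and its retry-budget argument are illustrative, not part of the SPDK scripts):

```shell
#!/usr/bin/env bash
# Poll-until-exit, as in delete_subsystem.sh: `kill -0` sends no signal but
# reports whether the PID still exists; sleep between probes and stop once
# the retry budget is exhausted. Returns 0 if the process exited in time.
wait_for_exit() {
    local pid=$1 max_tries=${2:-20} delay=0
    while kill -0 "$pid" 2>/dev/null; do
        (( delay++ > max_tries )) && return 1   # budget exhausted, give up
        sleep 0.5
    done
    return 0
}

# demo: a short-lived background job exits well within the budget
sleep 1 &
wait_for_exit $! 20 && echo "process exited"
```

The `2>/dev/null` matters: once the process is gone, `kill -0` fails with "No such process" on stderr, which would otherwise clutter a test log exactly like the `kill: (504362) - No such process` lines above.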
00:10:36.284 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 505054
00:10:36.284 13:42:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:10:36.851 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:10:36.851 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 505054
00:10:36.851 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:10:37.417 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:10:37.417 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 505054
00:10:37.417 13:42:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:10:37.675 Initializing NVMe Controllers
00:10:37.675 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:37.675 Controller IO queue size 128, less than required.
00:10:37.675 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:37.675 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:37.675 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:37.675 Initialization complete. Launching workers.
00:10:37.675 ========================================================
00:10:37.675 Latency(us)
00:10:37.675 Device Information : IOPS MiB/s Average min max
00:10:37.675 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002100.76 1000107.85 1005982.47
00:10:37.675 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004643.34 1000217.26 1042816.61
00:10:37.675 ========================================================
00:10:37.675 Total : 256.00 0.12 1003372.05 1000107.85 1042816.61
00:10:37.675
00:10:37.934 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:10:37.934 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 505054
00:10:37.934 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (505054) - No such process
00:10:37.934 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 505054
00:10:37.934 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:10:37.934 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:10:37.934 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:37.934 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:10:37.934 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:37.934 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:10:37.934 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:37.934 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r
nvme-tcp 00:10:37.934 rmmod nvme_tcp 00:10:37.934 rmmod nvme_fabrics 00:10:37.934 rmmod nvme_keyring 00:10:37.934 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:37.934 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:10:37.934 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:10:37.934 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 504334 ']' 00:10:37.934 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 504334 00:10:37.934 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 504334 ']' 00:10:37.934 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 504334 00:10:37.934 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:10:37.934 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.934 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 504334 00:10:37.934 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:37.934 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:37.934 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 504334' 00:10:37.934 killing process with pid 504334 00:10:37.934 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 504334 00:10:37.934 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 504334 
00:10:38.194 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:38.194 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:38.194 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:38.194 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:10:38.194 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:10:38.194 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:38.194 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:38.194 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:38.194 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:38.194 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.194 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.194 13:42:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.098 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:40.357 00:10:40.357 real 0m16.185s 00:10:40.357 user 0m29.222s 00:10:40.357 sys 0m5.485s 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:40.357 ************************************ 00:10:40.357 END TEST 
nvmf_delete_subsystem 00:10:40.357 ************************************ 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:40.357 ************************************ 00:10:40.357 START TEST nvmf_host_management 00:10:40.357 ************************************ 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:40.357 * Looking for test storage... 00:10:40.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:40.357 13:42:22 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:40.357 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:40.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.357 --rc genhtml_branch_coverage=1 00:10:40.357 --rc genhtml_function_coverage=1 00:10:40.357 --rc genhtml_legend=1 00:10:40.357 --rc 
geninfo_all_blocks=1 00:10:40.357 --rc geninfo_unexecuted_blocks=1 00:10:40.357 00:10:40.358 ' 00:10:40.358 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:40.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.358 --rc genhtml_branch_coverage=1 00:10:40.358 --rc genhtml_function_coverage=1 00:10:40.358 --rc genhtml_legend=1 00:10:40.358 --rc geninfo_all_blocks=1 00:10:40.358 --rc geninfo_unexecuted_blocks=1 00:10:40.358 00:10:40.358 ' 00:10:40.358 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:40.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.358 --rc genhtml_branch_coverage=1 00:10:40.358 --rc genhtml_function_coverage=1 00:10:40.358 --rc genhtml_legend=1 00:10:40.358 --rc geninfo_all_blocks=1 00:10:40.358 --rc geninfo_unexecuted_blocks=1 00:10:40.358 00:10:40.358 ' 00:10:40.358 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:40.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.358 --rc genhtml_branch_coverage=1 00:10:40.358 --rc genhtml_function_coverage=1 00:10:40.358 --rc genhtml_legend=1 00:10:40.358 --rc geninfo_all_blocks=1 00:10:40.358 --rc geninfo_unexecuted_blocks=1 00:10:40.358 00:10:40.358 ' 00:10:40.358 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:40.358 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:40.358 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.358 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.358 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:10:40.358 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.358 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.358 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.358 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.358 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.358 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.358 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.617 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:40.617 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:40.617 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.617 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.617 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:40.617 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.617 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:10:40.618 
13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:40.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
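The `[: : integer expression expected` line above is bash complaining that nvmf/common.sh line 33 ran `[ '' -eq 1 ]`: an empty (unset) variable is not a valid operand for a numeric test. A minimal reproduction with a defensive default; the variable name is illustrative, not the harness's:

```shell
# Reproduce the failure: an empty string is not an integer, so the test
# itself errors (stderr suppressed here) and takes the false branch.
maybe_flag=""
if [ "$maybe_flag" -eq 1 ] 2>/dev/null; then
    echo "flag set"
fi

# Defensive form: default the (hypothetical) variable to 0 before the
# numeric comparison, so the test is always well-formed.
if [ "${maybe_flag:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag unset"
fi
```

As the trace shows, the harness tolerates the error and continues, since the false branch is the intended path when the flag is unset.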
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:10:40.618 13:42:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:47.182 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:47.182 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:47.182 13:42:28 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:47.182 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:47.183 Found net devices under 0000:86:00.0: cvl_0_0 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:47.183 Found net devices under 0000:86:00.1: cvl_0_1 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
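The discovery pass above globs each PCI device's `net/` directory under sysfs, then strips the path prefix to get bare interface names. The same two-step pattern can be exercised against a throwaway mock tree; the directory layout below is fabricated for illustration and only mirrors the shapes seen in the trace:

```shell
# Build a disposable mock of the sysfs layout the loop globs.
sysroot=$(mktemp -d)
mkdir -p "$sysroot/0000:86:00.0/net/cvl_0_0" "$sysroot/0000:86:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:86:00.0 0000:86:00.1; do
    # Glob the per-device net/ directory, then strip everything up to the
    # last '/', mirroring pci_net_devs=("${pci_net_devs[@]##*/}") above.
    pci_net_devs=("$sysroot/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
echo "total: ${#net_devs[@]}"
rm -rf "$sysroot"
```

With two single-interface devices this yields `cvl_0_0` and `cvl_0_1`, matching the `(( 2 == 0 ))` guard evaluating false and `is_hw=yes` in the log.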
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:47.183 13:42:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:47.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:47.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:10:47.183 00:10:47.183 --- 10.0.0.2 ping statistics --- 00:10:47.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.183 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:47.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
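The `ipts` call above expands to a plain `iptables` invocation with the rule text replayed into `-m comment`, so teardown can later locate every rule tagged `SPDK_NVMF:`. A hedged sketch of such a wrapper; the `IPT_BIN` override is an assumption added here so the example can dry-run without root or touching the firewall:

```shell
# Tagging wrapper in the spirit of ipts() in nvmf/common.sh: append an
# identifying comment built from the rule's own arguments, so cleanup can
# later match rules whose comment starts with SPDK_NVMF:.
# IPT_BIN is an illustrative knob (not in the harness) enabling a dry run.
ipts() {
    "${IPT_BIN:-iptables}" "$@" -m comment --comment "SPDK_NVMF:$*"
}

# Dry run: print the assembled command instead of mutating the firewall.
IPT_BIN=echo ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The printed line matches the expanded `iptables -I INPUT 1 ... -m comment --comment 'SPDK_NVMF:...'` command recorded in the trace.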
00:10:47.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:10:47.183 00:10:47.183 --- 10.0.0.1 ping statistics --- 00:10:47.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.183 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:47.183 13:42:29 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=509281 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 509281 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 509281 ']' 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:47.183 [2024-12-05 13:42:29.104445] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:10:47.183 [2024-12-05 13:42:29.104489] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.183 [2024-12-05 13:42:29.181682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:47.183 [2024-12-05 13:42:29.224812] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:47.183 [2024-12-05 13:42:29.224847] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:47.183 [2024-12-05 13:42:29.224854] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:47.183 [2024-12-05 13:42:29.224863] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:47.183 [2024-12-05 13:42:29.224868] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:47.183 [2024-12-05 13:42:29.226531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:47.183 [2024-12-05 13:42:29.226558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:47.183 [2024-12-05 13:42:29.226685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.183 [2024-12-05 13:42:29.226686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:47.183 [2024-12-05 13:42:29.364558] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:47.183 13:42:29 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:47.183 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:47.184 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.184 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:47.184 Malloc0 00:10:47.184 [2024-12-05 13:42:29.438122] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:47.184 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.184 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:47.184 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:47.184 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:47.184 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=509326 00:10:47.184 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 509326 /var/tmp/bdevperf.sock 00:10:47.184 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 509326 ']' 00:10:47.184 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:47.184 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:47.184 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:47.184 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.184 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:47.184 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:10:47.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:47.184 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.184 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:10:47.184 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:47.184 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:47.184 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:47.184 { 00:10:47.184 "params": { 00:10:47.184 "name": "Nvme$subsystem", 00:10:47.184 "trtype": "$TEST_TRANSPORT", 00:10:47.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:47.184 "adrfam": "ipv4", 00:10:47.184 "trsvcid": "$NVMF_PORT", 00:10:47.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:47.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:47.184 "hdgst": ${hdgst:-false}, 
00:10:47.184 "ddgst": ${ddgst:-false} 00:10:47.184 }, 00:10:47.184 "method": "bdev_nvme_attach_controller" 00:10:47.184 } 00:10:47.184 EOF 00:10:47.184 )") 00:10:47.184 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:10:47.184 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:10:47.184 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:10:47.184 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:47.184 "params": { 00:10:47.184 "name": "Nvme0", 00:10:47.184 "trtype": "tcp", 00:10:47.184 "traddr": "10.0.0.2", 00:10:47.184 "adrfam": "ipv4", 00:10:47.184 "trsvcid": "4420", 00:10:47.184 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:47.184 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:47.184 "hdgst": false, 00:10:47.184 "ddgst": false 00:10:47.184 }, 00:10:47.184 "method": "bdev_nvme_attach_controller" 00:10:47.184 }' 00:10:47.184 [2024-12-05 13:42:29.535134] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:10:47.184 [2024-12-05 13:42:29.535177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid509326 ] 00:10:47.184 [2024-12-05 13:42:29.607985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.184 [2024-12-05 13:42:29.648837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.442 Running I/O for 10 seconds... 
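The JSON bdevperf receives on `/dev/fd/63` is assembled by `gen_nvmf_target_json` from a parameterized heredoc, one fragment per subsystem, then joined by `jq`. The substitution step alone can be sketched as below; the values mirror the expanded config shown in the log:

```shell
# Expand a per-subsystem heredoc the way gen_nvmf_target_json does:
# shell variables are interpolated into the JSON fragment at read time.
subsystem=0
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

Feeding the result via process substitution (`--json <(printf '%s' "$config")`) is what produces the `/dev/fd/63` argument visible in the bdevperf command line above.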
00:10:47.442 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.442 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:10:47.442 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:47.442 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.442 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:47.442 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.442 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:47.442 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:47.443 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:47.443 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:47.443 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:47.443 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:47.443 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:47.443 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:47.443 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:10:47.443 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:47.443 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.443 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:47.443 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.443 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=80 00:10:47.443 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 80 -ge 100 ']' 00:10:47.443 13:42:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:10:47.701 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:10:47.701 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:47.701 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:47.701 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:47.701 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.701 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:47.701 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.961 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:10:47.961 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
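The `waitforio` trace above polls `bdev_get_iostat` and extracts `num_read_ops` with `jq`, retrying up to 10 times with a 0.25 s sleep until the count reaches 100 (here: 80 on the first sample, 707 on the second). A hedged stand-alone sketch of that loop; `get_read_ops` is a stub replaying the two samples from the log in place of `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops'`:

```shell
#!/usr/bin/env bash
# Stub for the rpc_cmd | jq pipeline: sets $count instead of printing it, so
# the call does not run in a subshell and $calls survives between samples.
calls=0
get_read_ops() {
    calls=$((calls + 1))
    if [ "$calls" -eq 1 ]; then count=80; else count=707; fi
}

waitforio() {
    local i=10 ret=1
    while [ "$i" -ne 0 ]; do
        get_read_ops
        if [ "$count" -ge 100 ]; then   # threshold from host_management.sh@58
            ret=0
            break
        fi
        sleep 0.25                      # host_management.sh@62
        i=$((i - 1))
    done
    return $ret
}

waitforio
rc=$?
echo "waitforio rc=$rc after $calls samples"
```

With the log's samples the first iteration fails the threshold, the loop sleeps once, and the second sample (707 >= 100) breaks out with `ret=0`, matching the `break` / `return 0` lines that follow in the trace.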
target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:10:47.961 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:47.961 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:47.961 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:47.961 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:47.961 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.961 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:47.961 [2024-12-05 13:42:30.301789] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ee090 is same with the state(6) to be set 00:10:47.961 [2024-12-05 13:42:30.302284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.962 [2024-12-05 13:42:30.302318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.962 [2024-12-05 13:42:30.302336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.962 [2024-12-05 13:42:30.302343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.962 [2024-12-05 13:42:30.302352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.962 [2024-12-05 13:42:30.302359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:10:47.962 [2024-12-05 13:42:30.302374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.962 [2024-12-05 13:42:30.302382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.962 [2024-12-05 13:42:30.302390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.962 [2024-12-05 13:42:30.302397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.962 [2024-12-05 13:42:30.302406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.962 [2024-12-05 13:42:30.302413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.962 [2024-12-05 13:42:30.302421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.962 [2024-12-05 13:42:30.302428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.962 [2024-12-05 13:42:30.302436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.962 [2024-12-05 13:42:30.302443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.962 [2024-12-05 13:42:30.302451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.962 [2024-12-05 13:42:30.302458] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.962 [2024-12-05 13:42:30.302466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.962 [2024-12-05 13:42:30.302472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.962 [2024-12-05 13:42:30.302480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.962 [2024-12-05 13:42:30.302487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.962 [2024-12-05 13:42:30.302495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.962 [2024-12-05 13:42:30.302501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.962 [2024-12-05 13:42:30.302509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.962 [2024-12-05 13:42:30.302519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.962 [2024-12-05 13:42:30.302527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.962 [2024-12-05 13:42:30.302534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.302548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.302563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.302580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.302595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.302609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.302624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:10:47.963 [2024-12-05 13:42:30.302632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.302638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.302653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.302667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.302682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.302697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 
13:42:30.302712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.302727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.302741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.302755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.302770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.302784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302792] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.302798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.302813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.302828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.302843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.302857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.302871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.302888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.302902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.302917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.302931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.302945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.302959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.302974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.302988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.302996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.303003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.303010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.303018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.303026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.303033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 
[2024-12-05 13:42:30.303041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.303048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.303056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.303062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.303076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.303083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.303092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.303098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.303106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.963 [2024-12-05 13:42:30.303113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.963 [2024-12-05 13:42:30.303121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.964 [2024-12-05 13:42:30.303127] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.964 [2024-12-05 13:42:30.303135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.964 [2024-12-05 13:42:30.303142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.964 [2024-12-05 13:42:30.303149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.964 [2024-12-05 13:42:30.303156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.964 [2024-12-05 13:42:30.303164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.964 [2024-12-05 13:42:30.303170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.964 [2024-12-05 13:42:30.303178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.964 [2024-12-05 13:42:30.303184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.964 [2024-12-05 13:42:30.303192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.964 [2024-12-05 13:42:30.303199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.964 [2024-12-05 13:42:30.303207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.964 [2024-12-05 13:42:30.303213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.964 [2024-12-05 13:42:30.303221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.964 [2024-12-05 13:42:30.303227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.964 [2024-12-05 13:42:30.303235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.964 [2024-12-05 13:42:30.303241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.964 [2024-12-05 13:42:30.303249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.964 [2024-12-05 13:42:30.303258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.964 [2024-12-05 13:42:30.303267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:47.964 [2024-12-05 13:42:30.303273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.964 [2024-12-05 13:42:30.303280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdff430 is same with the state(6) to be set 00:10:47.964 [2024-12-05 13:42:30.304239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:10:47.964 task offset: 
98304 on job bdev=Nvme0n1 fails 00:10:47.964 00:10:47.964 Latency(us) 00:10:47.964 [2024-12-05T12:42:30.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:47.964 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:47.964 Job: Nvme0n1 ended in about 0.41 seconds with error 00:10:47.964 Verification LBA range: start 0x0 length 0x400 00:10:47.964 Nvme0n1 : 0.41 1889.72 118.11 157.48 0.00 30435.37 3557.67 26838.55 00:10:47.964 [2024-12-05T12:42:30.551Z] =================================================================================================================== 00:10:47.964 [2024-12-05T12:42:30.551Z] Total : 1889.72 118.11 157.48 0.00 30435.37 3557.67 26838.55 00:10:47.964 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.964 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:47.964 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.964 [2024-12-05 13:42:30.306612] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:47.964 [2024-12-05 13:42:30.306634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe6510 (9): Bad file descriptor 00:10:47.964 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:47.964 [2024-12-05 13:42:30.310748] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:10:47.964 [2024-12-05 13:42:30.310818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:10:47.964 [2024-12-05 13:42:30.310843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.964 [2024-12-05 13:42:30.310857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:10:47.964 [2024-12-05 13:42:30.310864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:10:47.964 [2024-12-05 13:42:30.310871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:10:47.964 [2024-12-05 13:42:30.310877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe6510 00:10:47.964 [2024-12-05 13:42:30.310895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe6510 (9): Bad file descriptor 00:10:47.964 [2024-12-05 13:42:30.310907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:10:47.964 [2024-12-05 13:42:30.310914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:10:47.964 [2024-12-05 13:42:30.310922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:10:47.964 [2024-12-05 13:42:30.310930] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:10:47.964 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.964 13:42:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:10:48.898 13:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 509326 00:10:48.898 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (509326) - No such process 00:10:48.898 13:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:10:48.898 13:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:48.898 13:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:48.898 13:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:48.898 13:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:10:48.898 13:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:10:48.898 13:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:48.898 13:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:48.898 { 00:10:48.898 "params": { 00:10:48.898 "name": "Nvme$subsystem", 00:10:48.898 "trtype": "$TEST_TRANSPORT", 00:10:48.898 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:48.898 "adrfam": "ipv4", 00:10:48.898 "trsvcid": "$NVMF_PORT", 00:10:48.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:48.898 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:10:48.898 "hdgst": ${hdgst:-false}, 00:10:48.898 "ddgst": ${ddgst:-false} 00:10:48.898 }, 00:10:48.898 "method": "bdev_nvme_attach_controller" 00:10:48.898 } 00:10:48.898 EOF 00:10:48.898 )") 00:10:48.898 13:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:10:48.898 13:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:10:48.898 13:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:10:48.898 13:42:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:48.898 "params": { 00:10:48.898 "name": "Nvme0", 00:10:48.898 "trtype": "tcp", 00:10:48.898 "traddr": "10.0.0.2", 00:10:48.898 "adrfam": "ipv4", 00:10:48.898 "trsvcid": "4420", 00:10:48.898 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:48.898 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:48.898 "hdgst": false, 00:10:48.898 "ddgst": false 00:10:48.898 }, 00:10:48.898 "method": "bdev_nvme_attach_controller" 00:10:48.898 }' 00:10:48.898 [2024-12-05 13:42:31.374919] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:10:48.898 [2024-12-05 13:42:31.374966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid509580 ] 00:10:48.898 [2024-12-05 13:42:31.452487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.157 [2024-12-05 13:42:31.491622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.415 Running I/O for 1 seconds... 
00:10:50.346 2048.00 IOPS, 128.00 MiB/s 00:10:50.346 Latency(us) 00:10:50.346 [2024-12-05T12:42:32.933Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:50.346 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:50.346 Verification LBA range: start 0x0 length 0x400 00:10:50.346 Nvme0n1 : 1.02 2064.79 129.05 0.00 0.00 30515.45 4681.14 26838.55 00:10:50.346 [2024-12-05T12:42:32.933Z] =================================================================================================================== 00:10:50.346 [2024-12-05T12:42:32.933Z] Total : 2064.79 129.05 0.00 0.00 30515.45 4681.14 26838.55 00:10:50.604 13:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:10:50.604 13:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:50.604 13:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:10:50.604 13:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:50.604 13:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:10:50.604 13:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:50.604 13:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:10:50.604 13:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:50.604 13:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:10:50.604 13:42:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:50.604 13:42:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:50.604 rmmod nvme_tcp 00:10:50.604 rmmod nvme_fabrics 00:10:50.604 rmmod nvme_keyring 00:10:50.604 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:50.604 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:10:50.604 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:10:50.604 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 509281 ']' 00:10:50.604 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 509281 00:10:50.604 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 509281 ']' 00:10:50.604 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 509281 00:10:50.604 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:10:50.604 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:50.604 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 509281 00:10:50.604 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:50.604 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:50.604 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 509281' 00:10:50.604 killing process with pid 509281 00:10:50.604 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 509281 00:10:50.604 13:42:33 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 509281 00:10:50.862 [2024-12-05 13:42:33.236399] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:50.862 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:50.862 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:50.862 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:50.862 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:10:50.862 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:10:50.862 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:50.862 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:10:50.862 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:50.862 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:50.862 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.862 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.862 13:42:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.877 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:52.877 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:52.877 00:10:52.877 real 0m12.572s 00:10:52.877 user 0m20.138s 
00:10:52.877 sys 0m5.635s 00:10:52.877 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.877 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:52.877 ************************************ 00:10:52.877 END TEST nvmf_host_management 00:10:52.877 ************************************ 00:10:52.877 13:42:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:52.877 13:42:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:52.877 13:42:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.877 13:42:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:52.877 ************************************ 00:10:52.877 START TEST nvmf_lvol 00:10:52.877 ************************************ 00:10:52.877 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:53.168 * Looking for test storage... 
00:10:53.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:53.168 13:42:35 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:53.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.168 --rc genhtml_branch_coverage=1 00:10:53.168 --rc genhtml_function_coverage=1 00:10:53.168 --rc genhtml_legend=1 00:10:53.168 --rc geninfo_all_blocks=1 00:10:53.168 --rc geninfo_unexecuted_blocks=1 
00:10:53.168 00:10:53.168 ' 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:53.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.168 --rc genhtml_branch_coverage=1 00:10:53.168 --rc genhtml_function_coverage=1 00:10:53.168 --rc genhtml_legend=1 00:10:53.168 --rc geninfo_all_blocks=1 00:10:53.168 --rc geninfo_unexecuted_blocks=1 00:10:53.168 00:10:53.168 ' 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:53.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.168 --rc genhtml_branch_coverage=1 00:10:53.168 --rc genhtml_function_coverage=1 00:10:53.168 --rc genhtml_legend=1 00:10:53.168 --rc geninfo_all_blocks=1 00:10:53.168 --rc geninfo_unexecuted_blocks=1 00:10:53.168 00:10:53.168 ' 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:53.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.168 --rc genhtml_branch_coverage=1 00:10:53.168 --rc genhtml_function_coverage=1 00:10:53.168 --rc genhtml_legend=1 00:10:53.168 --rc geninfo_all_blocks=1 00:10:53.168 --rc geninfo_unexecuted_blocks=1 00:10:53.168 00:10:53.168 ' 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.168 13:42:35 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.168 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:53.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:10:53.169 13:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:59.738 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:59.738 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:59.738 
13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:59.738 Found net devices under 0000:86:00.0: cvl_0_0 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:59.738 13:42:41 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:59.738 Found net devices under 0000:86:00.1: cvl_0_1 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:59.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:59.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:10:59.738 00:10:59.738 --- 10.0.0.2 ping statistics --- 00:10:59.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.738 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:59.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:59.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:10:59.738 00:10:59.738 --- 10.0.0.1 ping statistics --- 00:10:59.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.738 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:59.738 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:59.739 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:59.739 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:59.739 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:59.739 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:59.739 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:59.739 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:59.739 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:10:59.739 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:59.739 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=513572 00:10:59.739 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:59.739 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 513572 00:10:59.739 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 513572 ']' 00:10:59.739 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.739 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:59.739 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.739 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:59.739 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:59.739 [2024-12-05 13:42:41.646740] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:10:59.739 [2024-12-05 13:42:41.646779] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:59.739 [2024-12-05 13:42:41.724338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:59.739 [2024-12-05 13:42:41.763230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:59.739 [2024-12-05 13:42:41.763267] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:59.739 [2024-12-05 13:42:41.763273] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:59.739 [2024-12-05 13:42:41.763279] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:59.739 [2024-12-05 13:42:41.763285] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
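The `nvmf_tcp_init` steps traced above (nvmf/common.sh@250-@291) can be recapped as a dry-run sketch. The interface names (`cvl_0_0`, `cvl_0_1`), the namespace name, and the 10.0.0.x addresses are taken from this log; the commands are only printed, not executed, since running them for real requires root and the ice-bound NICs present on this test node.

```shell
#!/bin/sh
# Dry-run sketch of the back-to-back TCP topology nvmf_tcp_init builds:
# the target-side port moves into a network namespace, the initiator-side
# port stays in the root namespace, and a firewall rule opens port 4420.
NS=cvl_0_0_ns_spdk   # namespace name from the log
TGT_IF=cvl_0_0       # target interface, gets 10.0.0.2 inside $NS
INI_IF=cvl_0_1       # initiator interface, gets 10.0.0.1 in the root ns
cmds=$(cat <<EOF
ip netns add $NS
ip link set $TGT_IF netns $NS
ip addr add 10.0.0.1/24 dev $INI_IF
ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF
ip link set $INI_IF up
ip netns exec $NS ip link set $TGT_IF up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec $NS ping -c 1 10.0.0.1
EOF
)
printf '%s\n' "$cmds"
```

The two pings at the end mirror the connectivity check in the log: one from the root namespace to the target IP, one from inside the namespace back to the initiator IP.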
00:10:59.739 [2024-12-05 13:42:41.764695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.739 [2024-12-05 13:42:41.764801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.739 [2024-12-05 13:42:41.764801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:59.739 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:59.739 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:10:59.739 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:59.739 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:59.739 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:59.739 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:59.739 13:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:59.739 [2024-12-05 13:42:42.074408] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:59.739 13:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.997 13:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:59.997 13:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.997 13:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:59.997 13:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:00.255 13:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:00.513 13:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=791b7b4b-457b-4819-aa24-ab666cdb3726 00:11:00.513 13:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 791b7b4b-457b-4819-aa24-ab666cdb3726 lvol 20 00:11:00.772 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c7605a5d-54b8-4d72-a4a7-c02efc568e8c 00:11:00.772 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:00.772 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c7605a5d-54b8-4d72-a4a7-c02efc568e8c 00:11:01.029 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:01.288 [2024-12-05 13:42:43.692625] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:01.288 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:01.546 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=513845 00:11:01.546 13:42:43 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:01.546 13:42:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:02.481 13:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot c7605a5d-54b8-4d72-a4a7-c02efc568e8c MY_SNAPSHOT 00:11:02.739 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=aeddcc7f-1079-464a-a81a-4682e3fa17ac 00:11:02.739 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize c7605a5d-54b8-4d72-a4a7-c02efc568e8c 30 00:11:02.998 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone aeddcc7f-1079-464a-a81a-4682e3fa17ac MY_CLONE 00:11:03.256 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f6289aa5-4ccc-4d4f-9a1d-77603341b8bb 00:11:03.256 13:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f6289aa5-4ccc-4d4f-9a1d-77603341b8bb 00:11:03.823 13:42:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 513845 00:11:11.935 Initializing NVMe Controllers 00:11:11.935 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:11.935 Controller IO queue size 128, less than required. 00:11:11.935 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
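The nvmf_lvol test body traced above can be recapped as a dry-run listing of its RPC sequence. `$rpc` abbreviates the full `scripts/rpc.py` path used in the log, and the UUID placeholders stand for the values the real calls return at runtime (the log shows e.g. `791b7b4b-...` for the lvstore and `c7605a5d-...` for the lvol); the commands are only printed, since executing them needs a running nvmf_tgt.

```shell
#!/bin/sh
# Dry-run recap of the nvmf_lvol RPC flow: two malloc bdevs striped into
# a raid0, an lvstore and lvol on top, exported over NVMe/TCP, then
# snapshot/resize/clone/inflate exercised while perf I/O runs.
rpc="scripts/rpc.py"   # shortened from the full workspace path in the log
lvs=LVS_UUID           # placeholder: returned by bdev_lvol_create_lvstore
lvol=LVOL_UUID         # placeholder: returned by bdev_lvol_create
cmds=$(cat <<EOF
$rpc bdev_malloc_create 64 512                          # -> Malloc0
$rpc bdev_malloc_create 64 512                          # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
$rpc bdev_lvol_create_lvstore raid0 lvs                 # -> $lvs
$rpc bdev_lvol_create -u $lvs lvol 20                   # -> $lvol
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $lvol
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_lvol_snapshot $lvol MY_SNAPSHOT               # under perf load
$rpc bdev_lvol_resize $lvol 30
$rpc bdev_lvol_clone SNAP_UUID MY_CLONE
$rpc bdev_lvol_inflate CLONE_UUID
EOF
)
printf '%s\n' "$cmds"
```

The snapshot/resize/clone/inflate calls run concurrently with the `spdk_nvme_perf` randwrite workload started just before them, which is the point of the test: metadata operations on a live, exported lvol.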
00:11:11.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:11.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:11.935 Initialization complete. Launching workers. 00:11:11.935 ======================================================== 00:11:11.935 Latency(us) 00:11:11.935 Device Information : IOPS MiB/s Average min max 00:11:11.935 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11971.21 46.76 10691.92 1504.37 62404.03 00:11:11.935 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11892.61 46.46 10767.11 3417.23 48989.83 00:11:11.935 ======================================================== 00:11:11.935 Total : 23863.82 93.22 10729.39 1504.37 62404.03 00:11:11.935 00:11:11.935 13:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:11.935 13:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c7605a5d-54b8-4d72-a4a7-c02efc568e8c 00:11:12.191 13:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 791b7b4b-457b-4819-aa24-ab666cdb3726 00:11:12.449 13:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:12.449 13:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:12.449 13:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:12.449 13:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:12.449 13:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:11:12.449 13:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:12.449 13:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:11:12.449 13:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:12.449 13:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:12.449 rmmod nvme_tcp 00:11:12.449 rmmod nvme_fabrics 00:11:12.449 rmmod nvme_keyring 00:11:12.449 13:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:12.449 13:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:11:12.449 13:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:11:12.449 13:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 513572 ']' 00:11:12.449 13:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 513572 00:11:12.449 13:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 513572 ']' 00:11:12.449 13:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 513572 00:11:12.449 13:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:11:12.449 13:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:12.449 13:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 513572 00:11:12.449 13:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:12.449 13:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:12.449 13:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 513572' 00:11:12.449 killing process with pid 513572 00:11:12.449 13:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- common/autotest_common.sh@973 -- # kill 513572 00:11:12.449 13:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 513572 00:11:12.708 13:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:12.708 13:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:12.708 13:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:12.708 13:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:11:12.708 13:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:11:12.708 13:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:12.708 13:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:11:12.708 13:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:12.708 13:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:12.708 13:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.708 13:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.708 13:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:15.242 00:11:15.242 real 0m21.857s 00:11:15.242 user 1m2.737s 00:11:15.242 sys 0m7.639s 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:15.242 ************************************ 00:11:15.242 END TEST nvmf_lvol 00:11:15.242 
************************************ 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:15.242 ************************************ 00:11:15.242 START TEST nvmf_lvs_grow 00:11:15.242 ************************************ 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:15.242 * Looking for test storage... 00:11:15.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@336 -- # read -ra ver1 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:15.242 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:15.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.243 --rc genhtml_branch_coverage=1 00:11:15.243 --rc genhtml_function_coverage=1 00:11:15.243 --rc genhtml_legend=1 00:11:15.243 --rc geninfo_all_blocks=1 00:11:15.243 --rc geninfo_unexecuted_blocks=1 00:11:15.243 00:11:15.243 ' 
00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:15.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.243 --rc genhtml_branch_coverage=1 00:11:15.243 --rc genhtml_function_coverage=1 00:11:15.243 --rc genhtml_legend=1 00:11:15.243 --rc geninfo_all_blocks=1 00:11:15.243 --rc geninfo_unexecuted_blocks=1 00:11:15.243 00:11:15.243 ' 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:15.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.243 --rc genhtml_branch_coverage=1 00:11:15.243 --rc genhtml_function_coverage=1 00:11:15.243 --rc genhtml_legend=1 00:11:15.243 --rc geninfo_all_blocks=1 00:11:15.243 --rc geninfo_unexecuted_blocks=1 00:11:15.243 00:11:15.243 ' 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:15.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.243 --rc genhtml_branch_coverage=1 00:11:15.243 --rc genhtml_function_coverage=1 00:11:15.243 --rc genhtml_legend=1 00:11:15.243 --rc geninfo_all_blocks=1 00:11:15.243 --rc geninfo_unexecuted_blocks=1 00:11:15.243 00:11:15.243 ' 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:15.243 13:42:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.243 
13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:15.243 13:42:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:15.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.243 
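The trace above records a genuine shell error: `nvmf/common.sh: line 33: [: : integer expression expected`, triggered by an empty expansion reaching a numeric test (`'[' '' -eq 1 ']'`). The defensive pattern is to default the variable to 0 before comparing. This is a minimal sketch only; `SOME_TEST_FLAG` is a hypothetical name, since the log shows the empty expansion but not the variable behind it:

```shell
# Hedged sketch: guard numeric -eq comparisons against empty/unset variables.
# SOME_TEST_FLAG is hypothetical; the log only shows '' reaching the test.
check_flag() {
  flag="${SOME_TEST_FLAG:-0}"   # empty or unset collapses to 0, keeping [ ... -eq ... ] valid
  if [ "$flag" -eq 1 ]; then
    echo "flag enabled"
  else
    echo "flag disabled"
  fi
}
check_flag
```

With the variable unset this prints `flag disabled` instead of raising the "integer expression expected" error seen in the trace.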
13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:11:15.243 13:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:21.817 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.817 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:21.817 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:21.818 
13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:21.818 Found net devices under 0000:86:00.0: cvl_0_0 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:21.818 Found net devices under 0000:86:00.1: cvl_0_1 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:21.818 13:43:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:21.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:21.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:11:21.818 00:11:21.818 --- 10.0.0.2 ping statistics --- 00:11:21.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.818 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:21.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:21.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:11:21.818 00:11:21.818 --- 10.0.0.1 ping statistics --- 00:11:21.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.818 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=519303 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 519303 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 519303 ']' 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:21.818 [2024-12-05 13:43:03.631763] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:11:21.818 [2024-12-05 13:43:03.631812] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.818 [2024-12-05 13:43:03.708658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.818 [2024-12-05 13:43:03.751770] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.818 [2024-12-05 13:43:03.751807] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.818 [2024-12-05 13:43:03.751815] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.818 [2024-12-05 13:43:03.751821] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.818 [2024-12-05 13:43:03.751826] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
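For reference, the namespace plumbing traced earlier (nvmf/common.sh@271–287: creating `cvl_0_0_ns_spdk`, moving the target NIC into it, assigning 10.0.0.1/10.0.0.2, and opening TCP port 4420) amounts to roughly the following. This is a dry-run sketch rather than the test's actual helper: interface names and addresses are copied from the log, and commands are only echoed unless `APPLY=1` is set (which would require root and the real hardware):

```shell
# Dry-run sketch of the target/initiator namespace split seen in the trace.
# Names (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk) and IPs come from the log above.
run() { if [ "${APPLY:-0}" = 1 ]; then "$@"; else echo "+ $*"; fi; }

setup_nvmf_ns() {
  ns=cvl_0_0_ns_spdk
  run ip netns add "$ns"
  run ip link set cvl_0_0 netns "$ns"          # target-side NIC moves into the namespace
  run ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator IP stays in the root namespace
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
  run ip link set cvl_0_1 up
  run ip netns exec "$ns" ip link set cvl_0_0 up
  run ip netns exec "$ns" ip link set lo up
  run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
}
setup_nvmf_ns
```

The two pings in the trace (10.0.0.2 from the root namespace, 10.0.0.1 from inside `cvl_0_0_ns_spdk`) then verify the split is functional before `nvmf_tgt` is launched inside the namespace.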
00:11:21.818 [2024-12-05 13:43:03.752396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:11:21.818 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:21.819 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:21.819 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:21.819 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.819 13:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:21.819 [2024-12-05 13:43:04.062938] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:21.819 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:21.819 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:21.819 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.819 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:21.819 ************************************ 00:11:21.819 START TEST lvs_grow_clean 00:11:21.819 ************************************ 00:11:21.819 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:11:21.819 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:11:21.819 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:21.819 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:21.819 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:21.819 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:21.819 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:21.819 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:21.819 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:21.819 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:21.819 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:21.819 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:22.078 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=d8fbe5b5-f863-47d8-a6f2-1635d2bc54f0 00:11:22.078 13:43:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d8fbe5b5-f863-47d8-a6f2-1635d2bc54f0 00:11:22.078 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:22.337 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:22.337 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:22.337 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d8fbe5b5-f863-47d8-a6f2-1635d2bc54f0 lvol 150 00:11:22.596 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d453e4aa-1f73-4689-8680-5f10a476851d 00:11:22.596 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:22.596 13:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:22.596 [2024-12-05 13:43:05.086794] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:22.596 [2024-12-05 13:43:05.086844] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:22.596 true 00:11:22.596 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d8fbe5b5-f863-47d8-a6f2-1635d2bc54f0 00:11:22.596 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:22.854 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:22.854 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:23.112 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d453e4aa-1f73-4689-8680-5f10a476851d 00:11:23.112 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:23.370 [2024-12-05 13:43:05.800964] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:23.370 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:23.629 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=519741 00:11:23.629 13:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:23.629 13:43:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:23.629 13:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 519741 /var/tmp/bdevperf.sock 00:11:23.629 13:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 519741 ']' 00:11:23.629 13:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:23.629 13:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:23.629 13:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:23.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:23.629 13:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:23.629 13:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:23.629 [2024-12-05 13:43:06.043344] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:11:23.629 [2024-12-05 13:43:06.043409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid519741 ] 00:11:23.629 [2024-12-05 13:43:06.116056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.629 [2024-12-05 13:43:06.155872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.889 13:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:23.889 13:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:11:23.889 13:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:24.148 Nvme0n1 00:11:24.148 13:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:24.148 [ 00:11:24.148 { 00:11:24.148 "name": "Nvme0n1", 00:11:24.148 "aliases": [ 00:11:24.148 "d453e4aa-1f73-4689-8680-5f10a476851d" 00:11:24.148 ], 00:11:24.148 "product_name": "NVMe disk", 00:11:24.148 "block_size": 4096, 00:11:24.148 "num_blocks": 38912, 00:11:24.148 "uuid": "d453e4aa-1f73-4689-8680-5f10a476851d", 00:11:24.148 "numa_id": 1, 00:11:24.148 "assigned_rate_limits": { 00:11:24.148 "rw_ios_per_sec": 0, 00:11:24.148 "rw_mbytes_per_sec": 0, 00:11:24.148 "r_mbytes_per_sec": 0, 00:11:24.148 "w_mbytes_per_sec": 0 00:11:24.148 }, 00:11:24.148 "claimed": false, 00:11:24.148 "zoned": false, 00:11:24.148 "supported_io_types": { 00:11:24.148 "read": true, 
00:11:24.148 "write": true, 00:11:24.148 "unmap": true, 00:11:24.148 "flush": true, 00:11:24.148 "reset": true, 00:11:24.148 "nvme_admin": true, 00:11:24.148 "nvme_io": true, 00:11:24.148 "nvme_io_md": false, 00:11:24.148 "write_zeroes": true, 00:11:24.148 "zcopy": false, 00:11:24.148 "get_zone_info": false, 00:11:24.148 "zone_management": false, 00:11:24.148 "zone_append": false, 00:11:24.148 "compare": true, 00:11:24.148 "compare_and_write": true, 00:11:24.148 "abort": true, 00:11:24.148 "seek_hole": false, 00:11:24.148 "seek_data": false, 00:11:24.148 "copy": true, 00:11:24.148 "nvme_iov_md": false 00:11:24.148 }, 00:11:24.148 "memory_domains": [ 00:11:24.148 { 00:11:24.148 "dma_device_id": "system", 00:11:24.148 "dma_device_type": 1 00:11:24.148 } 00:11:24.148 ], 00:11:24.148 "driver_specific": { 00:11:24.148 "nvme": [ 00:11:24.148 { 00:11:24.148 "trid": { 00:11:24.148 "trtype": "TCP", 00:11:24.148 "adrfam": "IPv4", 00:11:24.148 "traddr": "10.0.0.2", 00:11:24.148 "trsvcid": "4420", 00:11:24.148 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:24.148 }, 00:11:24.148 "ctrlr_data": { 00:11:24.148 "cntlid": 1, 00:11:24.148 "vendor_id": "0x8086", 00:11:24.148 "model_number": "SPDK bdev Controller", 00:11:24.148 "serial_number": "SPDK0", 00:11:24.148 "firmware_revision": "25.01", 00:11:24.148 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:24.148 "oacs": { 00:11:24.148 "security": 0, 00:11:24.148 "format": 0, 00:11:24.148 "firmware": 0, 00:11:24.148 "ns_manage": 0 00:11:24.148 }, 00:11:24.148 "multi_ctrlr": true, 00:11:24.148 "ana_reporting": false 00:11:24.148 }, 00:11:24.148 "vs": { 00:11:24.148 "nvme_version": "1.3" 00:11:24.148 }, 00:11:24.148 "ns_data": { 00:11:24.148 "id": 1, 00:11:24.148 "can_share": true 00:11:24.148 } 00:11:24.148 } 00:11:24.148 ], 00:11:24.148 "mp_policy": "active_passive" 00:11:24.148 } 00:11:24.148 } 00:11:24.148 ] 00:11:24.148 13:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=519963 
00:11:24.148 13:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:24.148 13:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:24.407 Running I/O for 10 seconds... 00:11:25.344 Latency(us) 00:11:25.344 [2024-12-05T12:43:07.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:25.344 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:25.344 Nvme0n1 : 1.00 23764.00 92.83 0.00 0.00 0.00 0.00 0.00 00:11:25.344 [2024-12-05T12:43:07.931Z] =================================================================================================================== 00:11:25.344 [2024-12-05T12:43:07.931Z] Total : 23764.00 92.83 0.00 0.00 0.00 0.00 0.00 00:11:25.344 00:11:26.280 13:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d8fbe5b5-f863-47d8-a6f2-1635d2bc54f0 00:11:26.280 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:26.280 Nvme0n1 : 2.00 23865.00 93.22 0.00 0.00 0.00 0.00 0.00 00:11:26.280 [2024-12-05T12:43:08.867Z] =================================================================================================================== 00:11:26.280 [2024-12-05T12:43:08.867Z] Total : 23865.00 93.22 0.00 0.00 0.00 0.00 0.00 00:11:26.280 00:11:26.538 true 00:11:26.538 13:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d8fbe5b5-f863-47d8-a6f2-1635d2bc54f0 00:11:26.538 13:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:11:26.538 13:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:26.538 13:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:26.538 13:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 519963 00:11:27.474 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:27.474 Nvme0n1 : 3.00 23902.00 93.37 0.00 0.00 0.00 0.00 0.00 00:11:27.474 [2024-12-05T12:43:10.061Z] =================================================================================================================== 00:11:27.474 [2024-12-05T12:43:10.061Z] Total : 23902.00 93.37 0.00 0.00 0.00 0.00 0.00 00:11:27.474 00:11:28.410 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:28.410 Nvme0n1 : 4.00 23800.25 92.97 0.00 0.00 0.00 0.00 0.00 00:11:28.410 [2024-12-05T12:43:10.997Z] =================================================================================================================== 00:11:28.410 [2024-12-05T12:43:10.997Z] Total : 23800.25 92.97 0.00 0.00 0.00 0.00 0.00 00:11:28.410 00:11:29.343 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:29.343 Nvme0n1 : 5.00 23838.60 93.12 0.00 0.00 0.00 0.00 0.00 00:11:29.343 [2024-12-05T12:43:11.930Z] =================================================================================================================== 00:11:29.343 [2024-12-05T12:43:11.930Z] Total : 23838.60 93.12 0.00 0.00 0.00 0.00 0.00 00:11:29.343 00:11:30.278 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:30.278 Nvme0n1 : 6.00 23894.83 93.34 0.00 0.00 0.00 0.00 0.00 00:11:30.278 [2024-12-05T12:43:12.865Z] =================================================================================================================== 00:11:30.278 
[2024-12-05T12:43:12.865Z] Total : 23894.83 93.34 0.00 0.00 0.00 0.00 0.00 00:11:30.278 00:11:31.214 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:31.214 Nvme0n1 : 7.00 23926.71 93.46 0.00 0.00 0.00 0.00 0.00 00:11:31.214 [2024-12-05T12:43:13.801Z] =================================================================================================================== 00:11:31.214 [2024-12-05T12:43:13.801Z] Total : 23926.71 93.46 0.00 0.00 0.00 0.00 0.00 00:11:31.214 00:11:32.592 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:32.592 Nvme0n1 : 8.00 23964.00 93.61 0.00 0.00 0.00 0.00 0.00 00:11:32.592 [2024-12-05T12:43:15.179Z] =================================================================================================================== 00:11:32.592 [2024-12-05T12:43:15.179Z] Total : 23964.00 93.61 0.00 0.00 0.00 0.00 0.00 00:11:32.592 00:11:33.528 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:33.528 Nvme0n1 : 9.00 24003.67 93.76 0.00 0.00 0.00 0.00 0.00 00:11:33.528 [2024-12-05T12:43:16.115Z] =================================================================================================================== 00:11:33.528 [2024-12-05T12:43:16.115Z] Total : 24003.67 93.76 0.00 0.00 0.00 0.00 0.00 00:11:33.528 00:11:34.465 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:34.465 Nvme0n1 : 10.00 24019.00 93.82 0.00 0.00 0.00 0.00 0.00 00:11:34.465 [2024-12-05T12:43:17.052Z] =================================================================================================================== 00:11:34.465 [2024-12-05T12:43:17.052Z] Total : 24019.00 93.82 0.00 0.00 0.00 0.00 0.00 00:11:34.465 00:11:34.465 00:11:34.465 Latency(us) 00:11:34.465 [2024-12-05T12:43:17.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:34.465 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:11:34.465 Nvme0n1 : 10.00 24022.64 93.84 0.00 0.00 5325.41 1443.35 10423.34 00:11:34.465 [2024-12-05T12:43:17.052Z] =================================================================================================================== 00:11:34.465 [2024-12-05T12:43:17.052Z] Total : 24022.64 93.84 0.00 0.00 5325.41 1443.35 10423.34 00:11:34.465 { 00:11:34.465 "results": [ 00:11:34.465 { 00:11:34.465 "job": "Nvme0n1", 00:11:34.465 "core_mask": "0x2", 00:11:34.465 "workload": "randwrite", 00:11:34.465 "status": "finished", 00:11:34.465 "queue_depth": 128, 00:11:34.465 "io_size": 4096, 00:11:34.465 "runtime": 10.003812, 00:11:34.465 "iops": 24022.64256865283, 00:11:34.465 "mibps": 93.83844753380012, 00:11:34.465 "io_failed": 0, 00:11:34.465 "io_timeout": 0, 00:11:34.465 "avg_latency_us": 5325.40769461416, 00:11:34.465 "min_latency_us": 1443.352380952381, 00:11:34.465 "max_latency_us": 10423.344761904762 00:11:34.465 } 00:11:34.465 ], 00:11:34.465 "core_count": 1 00:11:34.465 } 00:11:34.465 13:43:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 519741 00:11:34.465 13:43:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 519741 ']' 00:11:34.465 13:43:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 519741 00:11:34.465 13:43:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:11:34.465 13:43:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:34.465 13:43:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 519741 00:11:34.465 13:43:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:34.465 13:43:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:34.465 13:43:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 519741' 00:11:34.465 killing process with pid 519741 00:11:34.465 13:43:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 519741 00:11:34.465 Received shutdown signal, test time was about 10.000000 seconds 00:11:34.465 00:11:34.465 Latency(us) 00:11:34.465 [2024-12-05T12:43:17.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:34.465 [2024-12-05T12:43:17.052Z] =================================================================================================================== 00:11:34.465 [2024-12-05T12:43:17.052Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:34.465 13:43:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 519741 00:11:34.465 13:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:34.729 13:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:34.987 13:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d8fbe5b5-f863-47d8-a6f2-1635d2bc54f0 00:11:34.987 13:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:35.245 13:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:11:35.245 13:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:35.245 13:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:35.503 [2024-12-05 13:43:17.839116] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:35.503 13:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d8fbe5b5-f863-47d8-a6f2-1635d2bc54f0 00:11:35.503 13:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:11:35.503 13:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d8fbe5b5-f863-47d8-a6f2-1635d2bc54f0 00:11:35.503 13:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:35.503 13:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:35.503 13:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:35.503 13:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:35.503 13:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:35.503 13:43:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:35.503 13:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:35.503 13:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:35.503 13:43:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d8fbe5b5-f863-47d8-a6f2-1635d2bc54f0 00:11:35.503 request: 00:11:35.503 { 00:11:35.503 "uuid": "d8fbe5b5-f863-47d8-a6f2-1635d2bc54f0", 00:11:35.503 "method": "bdev_lvol_get_lvstores", 00:11:35.503 "req_id": 1 00:11:35.503 } 00:11:35.503 Got JSON-RPC error response 00:11:35.503 response: 00:11:35.503 { 00:11:35.503 "code": -19, 00:11:35.503 "message": "No such device" 00:11:35.503 } 00:11:35.503 13:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:11:35.503 13:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:35.503 13:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:35.503 13:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:35.503 13:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:35.762 aio_bdev 00:11:35.762 13:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev d453e4aa-1f73-4689-8680-5f10a476851d 00:11:35.762 13:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=d453e4aa-1f73-4689-8680-5f10a476851d 00:11:35.762 13:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.762 13:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:11:35.762 13:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.762 13:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.762 13:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:36.020 13:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d453e4aa-1f73-4689-8680-5f10a476851d -t 2000 00:11:36.020 [ 00:11:36.020 { 00:11:36.020 "name": "d453e4aa-1f73-4689-8680-5f10a476851d", 00:11:36.020 "aliases": [ 00:11:36.020 "lvs/lvol" 00:11:36.020 ], 00:11:36.020 "product_name": "Logical Volume", 00:11:36.020 "block_size": 4096, 00:11:36.020 "num_blocks": 38912, 00:11:36.020 "uuid": "d453e4aa-1f73-4689-8680-5f10a476851d", 00:11:36.020 "assigned_rate_limits": { 00:11:36.020 "rw_ios_per_sec": 0, 00:11:36.020 "rw_mbytes_per_sec": 0, 00:11:36.020 "r_mbytes_per_sec": 0, 00:11:36.020 "w_mbytes_per_sec": 0 00:11:36.020 }, 00:11:36.020 "claimed": false, 00:11:36.020 "zoned": false, 00:11:36.020 "supported_io_types": { 00:11:36.020 "read": true, 00:11:36.020 "write": true, 00:11:36.020 "unmap": true, 00:11:36.020 "flush": false, 00:11:36.020 "reset": true, 00:11:36.020 
"nvme_admin": false, 00:11:36.020 "nvme_io": false, 00:11:36.020 "nvme_io_md": false, 00:11:36.020 "write_zeroes": true, 00:11:36.020 "zcopy": false, 00:11:36.020 "get_zone_info": false, 00:11:36.020 "zone_management": false, 00:11:36.020 "zone_append": false, 00:11:36.020 "compare": false, 00:11:36.020 "compare_and_write": false, 00:11:36.020 "abort": false, 00:11:36.020 "seek_hole": true, 00:11:36.020 "seek_data": true, 00:11:36.020 "copy": false, 00:11:36.020 "nvme_iov_md": false 00:11:36.020 }, 00:11:36.020 "driver_specific": { 00:11:36.020 "lvol": { 00:11:36.020 "lvol_store_uuid": "d8fbe5b5-f863-47d8-a6f2-1635d2bc54f0", 00:11:36.020 "base_bdev": "aio_bdev", 00:11:36.020 "thin_provision": false, 00:11:36.020 "num_allocated_clusters": 38, 00:11:36.020 "snapshot": false, 00:11:36.021 "clone": false, 00:11:36.021 "esnap_clone": false 00:11:36.021 } 00:11:36.021 } 00:11:36.021 } 00:11:36.021 ] 00:11:36.279 13:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:11:36.279 13:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d8fbe5b5-f863-47d8-a6f2-1635d2bc54f0 00:11:36.279 13:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:36.279 13:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:36.279 13:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d8fbe5b5-f863-47d8-a6f2-1635d2bc54f0 00:11:36.279 13:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:36.538 13:43:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:36.538 13:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d453e4aa-1f73-4689-8680-5f10a476851d 00:11:36.797 13:43:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d8fbe5b5-f863-47d8-a6f2-1635d2bc54f0 00:11:36.797 13:43:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:37.057 13:43:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:37.057 00:11:37.057 real 0m15.430s 00:11:37.057 user 0m14.971s 00:11:37.057 sys 0m1.494s 00:11:37.057 13:43:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.057 13:43:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:37.057 ************************************ 00:11:37.057 END TEST lvs_grow_clean 00:11:37.057 ************************************ 00:11:37.057 13:43:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:37.057 13:43:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:37.057 13:43:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:37.057 13:43:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:37.057 ************************************ 
00:11:37.057 START TEST lvs_grow_dirty 00:11:37.057 ************************************ 00:11:37.057 13:43:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:11:37.057 13:43:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:37.057 13:43:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:37.057 13:43:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:37.057 13:43:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:37.057 13:43:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:37.057 13:43:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:37.057 13:43:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:37.057 13:43:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:37.316 13:43:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:37.316 13:43:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:37.316 13:43:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:37.575 13:43:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=8a877a82-bb7e-476a-967b-31f980b4c0de 00:11:37.575 13:43:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a877a82-bb7e-476a-967b-31f980b4c0de 00:11:37.575 13:43:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:37.845 13:43:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:37.845 13:43:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:37.845 13:43:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8a877a82-bb7e-476a-967b-31f980b4c0de lvol 150 00:11:38.103 13:43:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=99445fcd-2134-4ce8-b517-976c0a9f80db 00:11:38.103 13:43:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:38.103 13:43:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:38.103 [2024-12-05 13:43:20.634414] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:11:38.103 [2024-12-05 13:43:20.634466] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:38.103 true 00:11:38.103 13:43:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a877a82-bb7e-476a-967b-31f980b4c0de 00:11:38.103 13:43:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:38.362 13:43:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:38.362 13:43:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:38.621 13:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 99445fcd-2134-4ce8-b517-976c0a9f80db 00:11:38.879 13:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:38.879 [2024-12-05 13:43:21.388683] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:38.879 13:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:39.138 13:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=522417 00:11:39.138 13:43:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:39.138 13:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:39.138 13:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 522417 /var/tmp/bdevperf.sock 00:11:39.138 13:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 522417 ']' 00:11:39.138 13:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:39.138 13:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:39.138 13:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:39.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:39.138 13:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:39.138 13:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:39.138 [2024-12-05 13:43:21.614213] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:11:39.138 [2024-12-05 13:43:21.614258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid522417 ] 00:11:39.138 [2024-12-05 13:43:21.689355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.396 [2024-12-05 13:43:21.732720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.396 13:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:39.396 13:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:11:39.396 13:43:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:39.654 Nvme0n1 00:11:39.654 13:43:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:39.912 [ 00:11:39.912 { 00:11:39.912 "name": "Nvme0n1", 00:11:39.912 "aliases": [ 00:11:39.912 "99445fcd-2134-4ce8-b517-976c0a9f80db" 00:11:39.912 ], 00:11:39.912 "product_name": "NVMe disk", 00:11:39.912 "block_size": 4096, 00:11:39.912 "num_blocks": 38912, 00:11:39.912 "uuid": "99445fcd-2134-4ce8-b517-976c0a9f80db", 00:11:39.912 "numa_id": 1, 00:11:39.912 "assigned_rate_limits": { 00:11:39.912 "rw_ios_per_sec": 0, 00:11:39.912 "rw_mbytes_per_sec": 0, 00:11:39.912 "r_mbytes_per_sec": 0, 00:11:39.912 "w_mbytes_per_sec": 0 00:11:39.912 }, 00:11:39.912 "claimed": false, 00:11:39.912 "zoned": false, 00:11:39.912 "supported_io_types": { 00:11:39.912 "read": true, 
00:11:39.912 "write": true, 00:11:39.912 "unmap": true, 00:11:39.912 "flush": true, 00:11:39.912 "reset": true, 00:11:39.912 "nvme_admin": true, 00:11:39.912 "nvme_io": true, 00:11:39.912 "nvme_io_md": false, 00:11:39.912 "write_zeroes": true, 00:11:39.912 "zcopy": false, 00:11:39.912 "get_zone_info": false, 00:11:39.912 "zone_management": false, 00:11:39.912 "zone_append": false, 00:11:39.912 "compare": true, 00:11:39.912 "compare_and_write": true, 00:11:39.912 "abort": true, 00:11:39.912 "seek_hole": false, 00:11:39.912 "seek_data": false, 00:11:39.912 "copy": true, 00:11:39.912 "nvme_iov_md": false 00:11:39.912 }, 00:11:39.912 "memory_domains": [ 00:11:39.912 { 00:11:39.912 "dma_device_id": "system", 00:11:39.912 "dma_device_type": 1 00:11:39.912 } 00:11:39.912 ], 00:11:39.912 "driver_specific": { 00:11:39.912 "nvme": [ 00:11:39.912 { 00:11:39.912 "trid": { 00:11:39.913 "trtype": "TCP", 00:11:39.913 "adrfam": "IPv4", 00:11:39.913 "traddr": "10.0.0.2", 00:11:39.913 "trsvcid": "4420", 00:11:39.913 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:39.913 }, 00:11:39.913 "ctrlr_data": { 00:11:39.913 "cntlid": 1, 00:11:39.913 "vendor_id": "0x8086", 00:11:39.913 "model_number": "SPDK bdev Controller", 00:11:39.913 "serial_number": "SPDK0", 00:11:39.913 "firmware_revision": "25.01", 00:11:39.913 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:39.913 "oacs": { 00:11:39.913 "security": 0, 00:11:39.913 "format": 0, 00:11:39.913 "firmware": 0, 00:11:39.913 "ns_manage": 0 00:11:39.913 }, 00:11:39.913 "multi_ctrlr": true, 00:11:39.913 "ana_reporting": false 00:11:39.913 }, 00:11:39.913 "vs": { 00:11:39.913 "nvme_version": "1.3" 00:11:39.913 }, 00:11:39.913 "ns_data": { 00:11:39.913 "id": 1, 00:11:39.913 "can_share": true 00:11:39.913 } 00:11:39.913 } 00:11:39.913 ], 00:11:39.913 "mp_policy": "active_passive" 00:11:39.913 } 00:11:39.913 } 00:11:39.913 ] 00:11:39.913 13:43:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:39.913 13:43:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=522574 00:11:39.913 13:43:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:39.913 Running I/O for 10 seconds... 00:11:40.848 Latency(us) 00:11:40.848 [2024-12-05T12:43:23.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:40.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:40.848 Nvme0n1 : 1.00 23645.00 92.36 0.00 0.00 0.00 0.00 0.00 00:11:40.848 [2024-12-05T12:43:23.435Z] =================================================================================================================== 00:11:40.848 [2024-12-05T12:43:23.435Z] Total : 23645.00 92.36 0.00 0.00 0.00 0.00 0.00 00:11:40.848 00:11:41.784 13:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8a877a82-bb7e-476a-967b-31f980b4c0de 00:11:42.044 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:42.044 Nvme0n1 : 2.00 23782.00 92.90 0.00 0.00 0.00 0.00 0.00 00:11:42.044 [2024-12-05T12:43:24.631Z] =================================================================================================================== 00:11:42.044 [2024-12-05T12:43:24.631Z] Total : 23782.00 92.90 0.00 0.00 0.00 0.00 0.00 00:11:42.044 00:11:42.044 true 00:11:42.044 13:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a877a82-bb7e-476a-967b-31f980b4c0de 00:11:42.044 13:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:11:42.303 13:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:42.303 13:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:42.303 13:43:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 522574 00:11:42.869 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:42.869 Nvme0n1 : 3.00 23804.00 92.98 0.00 0.00 0.00 0.00 0.00 00:11:42.869 [2024-12-05T12:43:25.456Z] =================================================================================================================== 00:11:42.869 [2024-12-05T12:43:25.456Z] Total : 23804.00 92.98 0.00 0.00 0.00 0.00 0.00 00:11:42.869 00:11:44.243 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:44.243 Nvme0n1 : 4.00 23886.00 93.30 0.00 0.00 0.00 0.00 0.00 00:11:44.243 [2024-12-05T12:43:26.830Z] =================================================================================================================== 00:11:44.243 [2024-12-05T12:43:26.830Z] Total : 23886.00 93.30 0.00 0.00 0.00 0.00 0.00 00:11:44.243 00:11:45.175 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:45.175 Nvme0n1 : 5.00 23947.80 93.55 0.00 0.00 0.00 0.00 0.00 00:11:45.175 [2024-12-05T12:43:27.762Z] =================================================================================================================== 00:11:45.175 [2024-12-05T12:43:27.762Z] Total : 23947.80 93.55 0.00 0.00 0.00 0.00 0.00 00:11:45.175 00:11:46.121 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:46.121 Nvme0n1 : 6.00 23978.67 93.67 0.00 0.00 0.00 0.00 0.00 00:11:46.121 [2024-12-05T12:43:28.708Z] =================================================================================================================== 00:11:46.121 
[2024-12-05T12:43:28.708Z] Total : 23978.67 93.67 0.00 0.00 0.00 0.00 0.00 00:11:46.121 00:11:47.101 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:47.101 Nvme0n1 : 7.00 23996.57 93.74 0.00 0.00 0.00 0.00 0.00 00:11:47.101 [2024-12-05T12:43:29.688Z] =================================================================================================================== 00:11:47.101 [2024-12-05T12:43:29.688Z] Total : 23996.57 93.74 0.00 0.00 0.00 0.00 0.00 00:11:47.101 00:11:48.034 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:48.034 Nvme0n1 : 8.00 23986.50 93.70 0.00 0.00 0.00 0.00 0.00 00:11:48.034 [2024-12-05T12:43:30.621Z] =================================================================================================================== 00:11:48.034 [2024-12-05T12:43:30.621Z] Total : 23986.50 93.70 0.00 0.00 0.00 0.00 0.00 00:11:48.034 00:11:48.969 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:48.969 Nvme0n1 : 9.00 24000.33 93.75 0.00 0.00 0.00 0.00 0.00 00:11:48.969 [2024-12-05T12:43:31.556Z] =================================================================================================================== 00:11:48.969 [2024-12-05T12:43:31.556Z] Total : 24000.33 93.75 0.00 0.00 0.00 0.00 0.00 00:11:48.969 00:11:49.905 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:49.905 Nvme0n1 : 10.00 24026.30 93.85 0.00 0.00 0.00 0.00 0.00 00:11:49.905 [2024-12-05T12:43:32.492Z] =================================================================================================================== 00:11:49.905 [2024-12-05T12:43:32.492Z] Total : 24026.30 93.85 0.00 0.00 0.00 0.00 0.00 00:11:49.905 00:11:49.905 00:11:49.905 Latency(us) 00:11:49.905 [2024-12-05T12:43:32.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:49.905 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:11:49.905 Nvme0n1 : 10.00 24028.92 93.86 0.00 0.00 5323.84 3120.76 12483.05 00:11:49.905 [2024-12-05T12:43:32.492Z] =================================================================================================================== 00:11:49.905 [2024-12-05T12:43:32.492Z] Total : 24028.92 93.86 0.00 0.00 5323.84 3120.76 12483.05 00:11:49.905 { 00:11:49.905 "results": [ 00:11:49.905 { 00:11:49.905 "job": "Nvme0n1", 00:11:49.905 "core_mask": "0x2", 00:11:49.905 "workload": "randwrite", 00:11:49.905 "status": "finished", 00:11:49.905 "queue_depth": 128, 00:11:49.905 "io_size": 4096, 00:11:49.905 "runtime": 10.004236, 00:11:49.905 "iops": 24028.9213489166, 00:11:49.905 "mibps": 93.86297401920547, 00:11:49.905 "io_failed": 0, 00:11:49.905 "io_timeout": 0, 00:11:49.905 "avg_latency_us": 5323.837139453957, 00:11:49.905 "min_latency_us": 3120.7619047619046, 00:11:49.905 "max_latency_us": 12483.047619047618 00:11:49.905 } 00:11:49.905 ], 00:11:49.905 "core_count": 1 00:11:49.905 } 00:11:49.905 13:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 522417 00:11:49.905 13:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 522417 ']' 00:11:49.905 13:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 522417 00:11:49.905 13:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:11:49.905 13:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.905 13:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 522417 00:11:50.165 13:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:50.165 13:43:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:50.165 13:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 522417' 00:11:50.165 killing process with pid 522417 00:11:50.165 13:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 522417 00:11:50.165 Received shutdown signal, test time was about 10.000000 seconds 00:11:50.165 00:11:50.165 Latency(us) 00:11:50.165 [2024-12-05T12:43:32.752Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:50.165 [2024-12-05T12:43:32.752Z] =================================================================================================================== 00:11:50.165 [2024-12-05T12:43:32.752Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:50.165 13:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 522417 00:11:50.165 13:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:50.423 13:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:50.682 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a877a82-bb7e-476a-967b-31f980b4c0de 00:11:50.682 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:50.941 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:11:50.941 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:50.941 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 519303 00:11:50.941 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 519303 00:11:50.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 519303 Killed "${NVMF_APP[@]}" "$@" 00:11:50.941 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:50.941 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:50.941 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:50.941 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:50.941 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:50.941 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=524428 00:11:50.941 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:50.941 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 524428 00:11:50.941 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 524428 ']' 00:11:50.941 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.941 13:43:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.942 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.942 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.942 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:50.942 [2024-12-05 13:43:33.362484] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:11:50.942 [2024-12-05 13:43:33.362530] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.942 [2024-12-05 13:43:33.442759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.942 [2024-12-05 13:43:33.482899] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:50.942 [2024-12-05 13:43:33.482935] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:50.942 [2024-12-05 13:43:33.482942] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:50.942 [2024-12-05 13:43:33.482948] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:50.942 [2024-12-05 13:43:33.482953] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:50.942 [2024-12-05 13:43:33.483501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.226 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.226 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:11:51.226 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:51.226 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:51.226 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:51.226 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.226 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:51.226 [2024-12-05 13:43:33.778153] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:51.226 [2024-12-05 13:43:33.778265] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:51.226 [2024-12-05 13:43:33.778290] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:51.487 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:51.487 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 99445fcd-2134-4ce8-b517-976c0a9f80db 00:11:51.487 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=99445fcd-2134-4ce8-b517-976c0a9f80db 
00:11:51.487 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:51.487 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:11:51.487 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:51.487 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:51.487 13:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:51.487 13:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 99445fcd-2134-4ce8-b517-976c0a9f80db -t 2000 00:11:51.745 [ 00:11:51.745 { 00:11:51.745 "name": "99445fcd-2134-4ce8-b517-976c0a9f80db", 00:11:51.745 "aliases": [ 00:11:51.745 "lvs/lvol" 00:11:51.745 ], 00:11:51.745 "product_name": "Logical Volume", 00:11:51.745 "block_size": 4096, 00:11:51.745 "num_blocks": 38912, 00:11:51.746 "uuid": "99445fcd-2134-4ce8-b517-976c0a9f80db", 00:11:51.746 "assigned_rate_limits": { 00:11:51.746 "rw_ios_per_sec": 0, 00:11:51.746 "rw_mbytes_per_sec": 0, 00:11:51.746 "r_mbytes_per_sec": 0, 00:11:51.746 "w_mbytes_per_sec": 0 00:11:51.746 }, 00:11:51.746 "claimed": false, 00:11:51.746 "zoned": false, 00:11:51.746 "supported_io_types": { 00:11:51.746 "read": true, 00:11:51.746 "write": true, 00:11:51.746 "unmap": true, 00:11:51.746 "flush": false, 00:11:51.746 "reset": true, 00:11:51.746 "nvme_admin": false, 00:11:51.746 "nvme_io": false, 00:11:51.746 "nvme_io_md": false, 00:11:51.746 "write_zeroes": true, 00:11:51.746 "zcopy": false, 00:11:51.746 "get_zone_info": false, 00:11:51.746 "zone_management": false, 00:11:51.746 "zone_append": 
false, 00:11:51.746 "compare": false, 00:11:51.746 "compare_and_write": false, 00:11:51.746 "abort": false, 00:11:51.746 "seek_hole": true, 00:11:51.746 "seek_data": true, 00:11:51.746 "copy": false, 00:11:51.746 "nvme_iov_md": false 00:11:51.746 }, 00:11:51.746 "driver_specific": { 00:11:51.746 "lvol": { 00:11:51.746 "lvol_store_uuid": "8a877a82-bb7e-476a-967b-31f980b4c0de", 00:11:51.746 "base_bdev": "aio_bdev", 00:11:51.746 "thin_provision": false, 00:11:51.746 "num_allocated_clusters": 38, 00:11:51.746 "snapshot": false, 00:11:51.746 "clone": false, 00:11:51.746 "esnap_clone": false 00:11:51.746 } 00:11:51.746 } 00:11:51.746 } 00:11:51.746 ] 00:11:51.746 13:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:11:51.746 13:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a877a82-bb7e-476a-967b-31f980b4c0de 00:11:51.746 13:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:52.004 13:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:52.004 13:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a877a82-bb7e-476a-967b-31f980b4c0de 00:11:52.004 13:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:52.004 13:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:52.004 13:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:11:52.263 [2024-12-05 13:43:34.731174] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:52.263 13:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a877a82-bb7e-476a-967b-31f980b4c0de 00:11:52.263 13:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:11:52.263 13:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a877a82-bb7e-476a-967b-31f980b4c0de 00:11:52.263 13:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:52.263 13:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:52.263 13:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:52.263 13:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:52.263 13:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:52.263 13:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:52.263 13:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:52.263 13:43:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:52.264 13:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a877a82-bb7e-476a-967b-31f980b4c0de 00:11:52.522 request: 00:11:52.522 { 00:11:52.522 "uuid": "8a877a82-bb7e-476a-967b-31f980b4c0de", 00:11:52.522 "method": "bdev_lvol_get_lvstores", 00:11:52.522 "req_id": 1 00:11:52.522 } 00:11:52.522 Got JSON-RPC error response 00:11:52.522 response: 00:11:52.522 { 00:11:52.522 "code": -19, 00:11:52.522 "message": "No such device" 00:11:52.522 } 00:11:52.522 13:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:11:52.522 13:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:52.522 13:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:52.523 13:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:52.523 13:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:52.780 aio_bdev 00:11:52.780 13:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 99445fcd-2134-4ce8-b517-976c0a9f80db 00:11:52.780 13:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=99445fcd-2134-4ce8-b517-976c0a9f80db 00:11:52.780 13:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:52.780 13:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:11:52.780 13:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:52.780 13:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:52.780 13:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:52.780 13:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 99445fcd-2134-4ce8-b517-976c0a9f80db -t 2000 00:11:53.038 [ 00:11:53.038 { 00:11:53.038 "name": "99445fcd-2134-4ce8-b517-976c0a9f80db", 00:11:53.038 "aliases": [ 00:11:53.038 "lvs/lvol" 00:11:53.038 ], 00:11:53.038 "product_name": "Logical Volume", 00:11:53.038 "block_size": 4096, 00:11:53.038 "num_blocks": 38912, 00:11:53.038 "uuid": "99445fcd-2134-4ce8-b517-976c0a9f80db", 00:11:53.038 "assigned_rate_limits": { 00:11:53.038 "rw_ios_per_sec": 0, 00:11:53.038 "rw_mbytes_per_sec": 0, 00:11:53.038 "r_mbytes_per_sec": 0, 00:11:53.038 "w_mbytes_per_sec": 0 00:11:53.038 }, 00:11:53.038 "claimed": false, 00:11:53.038 "zoned": false, 00:11:53.038 "supported_io_types": { 00:11:53.038 "read": true, 00:11:53.038 "write": true, 00:11:53.038 "unmap": true, 00:11:53.038 "flush": false, 00:11:53.038 "reset": true, 00:11:53.038 "nvme_admin": false, 00:11:53.038 "nvme_io": false, 00:11:53.038 "nvme_io_md": false, 00:11:53.038 "write_zeroes": true, 00:11:53.038 "zcopy": false, 00:11:53.038 "get_zone_info": false, 00:11:53.038 "zone_management": false, 00:11:53.038 "zone_append": false, 00:11:53.038 "compare": false, 00:11:53.038 "compare_and_write": false, 
00:11:53.038 "abort": false, 00:11:53.038 "seek_hole": true, 00:11:53.038 "seek_data": true, 00:11:53.038 "copy": false, 00:11:53.038 "nvme_iov_md": false 00:11:53.038 }, 00:11:53.038 "driver_specific": { 00:11:53.038 "lvol": { 00:11:53.038 "lvol_store_uuid": "8a877a82-bb7e-476a-967b-31f980b4c0de", 00:11:53.038 "base_bdev": "aio_bdev", 00:11:53.038 "thin_provision": false, 00:11:53.038 "num_allocated_clusters": 38, 00:11:53.038 "snapshot": false, 00:11:53.038 "clone": false, 00:11:53.038 "esnap_clone": false 00:11:53.038 } 00:11:53.038 } 00:11:53.038 } 00:11:53.038 ] 00:11:53.038 13:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:11:53.038 13:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a877a82-bb7e-476a-967b-31f980b4c0de 00:11:53.038 13:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:53.297 13:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:53.297 13:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a877a82-bb7e-476a-967b-31f980b4c0de 00:11:53.297 13:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:53.556 13:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:53.556 13:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 99445fcd-2134-4ce8-b517-976c0a9f80db 00:11:53.556 13:43:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8a877a82-bb7e-476a-967b-31f980b4c0de 00:11:53.815 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:54.091 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:54.091 00:11:54.091 real 0m16.931s 00:11:54.091 user 0m43.656s 00:11:54.091 sys 0m3.666s 00:11:54.091 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:54.091 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:54.091 ************************************ 00:11:54.091 END TEST lvs_grow_dirty 00:11:54.091 ************************************ 00:11:54.091 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:54.091 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:11:54.091 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:11:54.091 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:11:54.091 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:54.091 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:11:54.091 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:11:54.091 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:11:54.091 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:54.091 nvmf_trace.0 00:11:54.091 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:11:54.091 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:54.091 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:54.091 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:11:54.091 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:54.091 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:11:54.091 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:54.091 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:54.091 rmmod nvme_tcp 00:11:54.091 rmmod nvme_fabrics 00:11:54.350 rmmod nvme_keyring 00:11:54.350 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:54.350 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:11:54.350 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:11:54.350 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 524428 ']' 00:11:54.350 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 524428 00:11:54.350 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 524428 ']' 00:11:54.350 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 524428 
00:11:54.350 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:11:54.350 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:54.350 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 524428 00:11:54.350 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:54.350 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:54.350 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 524428' 00:11:54.350 killing process with pid 524428 00:11:54.350 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 524428 00:11:54.350 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 524428 00:11:54.350 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:54.350 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:54.350 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:54.350 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:11:54.350 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:11:54.350 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:54.350 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:11:54.350 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:54.350 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:11:54.350 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.350 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.350 13:43:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.889 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:56.889 00:11:56.889 real 0m41.653s 00:11:56.889 user 1m4.285s 00:11:56.889 sys 0m10.109s 00:11:56.889 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.889 13:43:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:56.889 ************************************ 00:11:56.889 END TEST nvmf_lvs_grow 00:11:56.889 ************************************ 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:56.889 ************************************ 00:11:56.889 START TEST nvmf_bdev_io_wait 00:11:56.889 ************************************ 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:56.889 * Looking for test storage... 
00:11:56.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:56.889 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.889 --rc genhtml_branch_coverage=1 00:11:56.889 --rc genhtml_function_coverage=1 00:11:56.889 --rc genhtml_legend=1 00:11:56.889 --rc geninfo_all_blocks=1 00:11:56.889 --rc geninfo_unexecuted_blocks=1 00:11:56.889 00:11:56.889 ' 00:11:56.889 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:56.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.889 --rc genhtml_branch_coverage=1 00:11:56.890 --rc genhtml_function_coverage=1 00:11:56.890 --rc genhtml_legend=1 00:11:56.890 --rc geninfo_all_blocks=1 00:11:56.890 --rc geninfo_unexecuted_blocks=1 00:11:56.890 00:11:56.890 ' 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:56.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.890 --rc genhtml_branch_coverage=1 00:11:56.890 --rc genhtml_function_coverage=1 00:11:56.890 --rc genhtml_legend=1 00:11:56.890 --rc geninfo_all_blocks=1 00:11:56.890 --rc geninfo_unexecuted_blocks=1 00:11:56.890 00:11:56.890 ' 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:56.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.890 --rc genhtml_branch_coverage=1 00:11:56.890 --rc genhtml_function_coverage=1 00:11:56.890 --rc genhtml_legend=1 00:11:56.890 --rc geninfo_all_blocks=1 00:11:56.890 --rc geninfo_unexecuted_blocks=1 00:11:56.890 00:11:56.890 ' 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.890 13:43:39 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:56.890 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:11:56.890 13:43:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:03.459 13:43:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:03.459 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:03.459 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:03.459 13:43:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:03.459 Found net devices under 0000:86:00.0: cvl_0_0 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.459 
13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.459 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:03.459 Found net devices under 0000:86:00.1: cvl_0_1 00:12:03.460 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.460 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:03.460 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:12:03.460 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:03.460 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:03.460 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:03.460 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:03.460 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:03.460 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:03.460 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:03.460 13:43:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:03.460 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:03.460 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:03.460 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:03.460 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:03.460 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:03.460 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:03.460 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:03.460 13:43:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:03.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:03.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:12:03.460 00:12:03.460 --- 10.0.0.2 ping statistics --- 00:12:03.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.460 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:03.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:03.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:12:03.460 00:12:03.460 --- 10.0.0.1 ping statistics --- 00:12:03.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.460 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=528539 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 528539 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 528539 ']' 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:03.460 [2024-12-05 13:43:45.333594] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:12:03.460 [2024-12-05 13:43:45.333652] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.460 [2024-12-05 13:43:45.413414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:03.460 [2024-12-05 13:43:45.454631] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.460 [2024-12-05 13:43:45.454672] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:03.460 [2024-12-05 13:43:45.454680] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.460 [2024-12-05 13:43:45.454686] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.460 [2024-12-05 13:43:45.454691] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:03.460 [2024-12-05 13:43:45.456146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.460 [2024-12-05 13:43:45.456277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.460 [2024-12-05 13:43:45.456411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.460 [2024-12-05 13:43:45.456412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:03.460 13:43:45 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:03.460 [2024-12-05 13:43:45.608563] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:03.460 Malloc0 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.460 
13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:03.460 [2024-12-05 13:43:45.663618] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=528737 00:12:03.460 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=528739 
00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:03.461 { 00:12:03.461 "params": { 00:12:03.461 "name": "Nvme$subsystem", 00:12:03.461 "trtype": "$TEST_TRANSPORT", 00:12:03.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:03.461 "adrfam": "ipv4", 00:12:03.461 "trsvcid": "$NVMF_PORT", 00:12:03.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:03.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:03.461 "hdgst": ${hdgst:-false}, 00:12:03.461 "ddgst": ${ddgst:-false} 00:12:03.461 }, 00:12:03.461 "method": "bdev_nvme_attach_controller" 00:12:03.461 } 00:12:03.461 EOF 00:12:03.461 )") 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=528741 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:03.461 { 00:12:03.461 "params": { 00:12:03.461 "name": "Nvme$subsystem", 00:12:03.461 "trtype": "$TEST_TRANSPORT", 00:12:03.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:03.461 "adrfam": "ipv4", 00:12:03.461 "trsvcid": "$NVMF_PORT", 00:12:03.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:03.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:03.461 "hdgst": ${hdgst:-false}, 00:12:03.461 "ddgst": ${ddgst:-false} 00:12:03.461 }, 00:12:03.461 "method": "bdev_nvme_attach_controller" 00:12:03.461 } 00:12:03.461 EOF 00:12:03.461 )") 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=528744 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:03.461 { 00:12:03.461 "params": { 
00:12:03.461 "name": "Nvme$subsystem", 00:12:03.461 "trtype": "$TEST_TRANSPORT", 00:12:03.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:03.461 "adrfam": "ipv4", 00:12:03.461 "trsvcid": "$NVMF_PORT", 00:12:03.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:03.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:03.461 "hdgst": ${hdgst:-false}, 00:12:03.461 "ddgst": ${ddgst:-false} 00:12:03.461 }, 00:12:03.461 "method": "bdev_nvme_attach_controller" 00:12:03.461 } 00:12:03.461 EOF 00:12:03.461 )") 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:03.461 { 00:12:03.461 "params": { 00:12:03.461 "name": "Nvme$subsystem", 00:12:03.461 "trtype": "$TEST_TRANSPORT", 00:12:03.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:03.461 "adrfam": "ipv4", 00:12:03.461 "trsvcid": "$NVMF_PORT", 00:12:03.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:03.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:03.461 "hdgst": ${hdgst:-false}, 00:12:03.461 "ddgst": ${ddgst:-false} 00:12:03.461 }, 00:12:03.461 "method": "bdev_nvme_attach_controller" 00:12:03.461 } 00:12:03.461 EOF 00:12:03.461 )") 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 528737 00:12:03.461 13:43:45 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:03.461 "params": { 00:12:03.461 "name": "Nvme1", 00:12:03.461 "trtype": "tcp", 00:12:03.461 "traddr": "10.0.0.2", 00:12:03.461 "adrfam": "ipv4", 00:12:03.461 "trsvcid": "4420", 00:12:03.461 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:03.461 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:03.461 "hdgst": false, 00:12:03.461 "ddgst": false 00:12:03.461 }, 00:12:03.461 "method": "bdev_nvme_attach_controller" 00:12:03.461 }' 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:03.461 "params": { 00:12:03.461 "name": "Nvme1", 00:12:03.461 "trtype": "tcp", 00:12:03.461 "traddr": "10.0.0.2", 00:12:03.461 "adrfam": "ipv4", 00:12:03.461 "trsvcid": "4420", 00:12:03.461 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:03.461 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:03.461 "hdgst": false, 00:12:03.461 "ddgst": false 00:12:03.461 }, 00:12:03.461 "method": "bdev_nvme_attach_controller" 00:12:03.461 }' 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:03.461 "params": { 00:12:03.461 "name": "Nvme1", 00:12:03.461 "trtype": "tcp", 00:12:03.461 "traddr": "10.0.0.2", 00:12:03.461 "adrfam": "ipv4", 00:12:03.461 "trsvcid": "4420", 00:12:03.461 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:03.461 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:03.461 "hdgst": false, 00:12:03.461 "ddgst": false 00:12:03.461 }, 00:12:03.461 "method": "bdev_nvme_attach_controller" 00:12:03.461 }' 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:03.461 13:43:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:03.461 "params": { 00:12:03.461 "name": "Nvme1", 00:12:03.461 "trtype": "tcp", 00:12:03.461 "traddr": "10.0.0.2", 00:12:03.461 "adrfam": "ipv4", 00:12:03.461 "trsvcid": "4420", 00:12:03.461 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:03.461 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:03.461 "hdgst": false, 00:12:03.461 "ddgst": false 00:12:03.461 }, 00:12:03.461 "method": "bdev_nvme_attach_controller" 00:12:03.461 }' 00:12:03.461 [2024-12-05 13:43:45.714232] Starting SPDK v25.01-pre git sha1 
2cae84b3c / DPDK 24.03.0 initialization... 00:12:03.461 [2024-12-05 13:43:45.714281] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:03.461 [2024-12-05 13:43:45.717556] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:12:03.461 [2024-12-05 13:43:45.717598] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:03.461 [2024-12-05 13:43:45.717808] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:12:03.461 [2024-12-05 13:43:45.717844] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:03.461 [2024-12-05 13:43:45.718957] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:12:03.461 [2024-12-05 13:43:45.718997] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:03.461 [2024-12-05 13:43:45.898834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.461 [2024-12-05 13:43:45.941213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:03.461 [2024-12-05 13:43:45.994436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.719 [2024-12-05 13:43:46.044120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.719 [2024-12-05 13:43:46.048082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:03.719 [2024-12-05 13:43:46.084202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:12:03.719 [2024-12-05 13:43:46.104211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.719 [2024-12-05 13:43:46.146315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:03.719 Running I/O for 1 seconds... 00:12:03.719 Running I/O for 1 seconds... 00:12:03.719 Running I/O for 1 seconds... 00:12:03.975 Running I/O for 1 seconds... 
00:12:04.904 14242.00 IOPS, 55.63 MiB/s 00:12:04.904 Latency(us) 00:12:04.904 [2024-12-05T12:43:47.491Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:04.904 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:04.904 Nvme1n1 : 1.01 14304.37 55.88 0.00 0.00 8922.68 4056.99 15978.30 00:12:04.904 [2024-12-05T12:43:47.491Z] =================================================================================================================== 00:12:04.904 [2024-12-05T12:43:47.491Z] Total : 14304.37 55.88 0.00 0.00 8922.68 4056.99 15978.30 00:12:04.904 6481.00 IOPS, 25.32 MiB/s 00:12:04.904 Latency(us) 00:12:04.904 [2024-12-05T12:43:47.491Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:04.904 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:04.904 Nvme1n1 : 1.01 6531.83 25.51 0.00 0.00 19495.12 8862.96 29335.16 00:12:04.904 [2024-12-05T12:43:47.491Z] =================================================================================================================== 00:12:04.904 [2024-12-05T12:43:47.491Z] Total : 6531.83 25.51 0.00 0.00 19495.12 8862.96 29335.16 00:12:04.904 243416.00 IOPS, 950.84 MiB/s 00:12:04.904 Latency(us) 00:12:04.904 [2024-12-05T12:43:47.491Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:04.904 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:04.904 Nvme1n1 : 1.00 243050.19 949.41 0.00 0.00 523.79 221.38 1490.16 00:12:04.904 [2024-12-05T12:43:47.491Z] =================================================================================================================== 00:12:04.904 [2024-12-05T12:43:47.491Z] Total : 243050.19 949.41 0.00 0.00 523.79 221.38 1490.16 00:12:04.904 6720.00 IOPS, 26.25 MiB/s 00:12:04.904 Latency(us) 00:12:04.904 [2024-12-05T12:43:47.491Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:04.904 Job: Nvme1n1 (Core Mask 
0x20, workload: read, depth: 128, IO size: 4096) 00:12:04.904 Nvme1n1 : 1.01 6813.72 26.62 0.00 0.00 18731.29 4681.14 42941.68 00:12:04.904 [2024-12-05T12:43:47.491Z] =================================================================================================================== 00:12:04.904 [2024-12-05T12:43:47.491Z] Total : 6813.72 26.62 0.00 0.00 18731.29 4681.14 42941.68 00:12:04.904 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 528739 00:12:04.904 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 528741 00:12:04.904 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 528744 00:12:04.904 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:04.904 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.904 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:04.904 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.904 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:04.904 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:04.904 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:04.904 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:12:04.904 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:04.904 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:12:04.904 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:04.904 
13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:04.904 rmmod nvme_tcp 00:12:04.904 rmmod nvme_fabrics 00:12:05.163 rmmod nvme_keyring 00:12:05.163 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:05.163 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:12:05.163 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:12:05.163 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 528539 ']' 00:12:05.163 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 528539 00:12:05.163 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 528539 ']' 00:12:05.163 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 528539 00:12:05.163 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:12:05.163 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:05.163 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 528539 00:12:05.163 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:05.163 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:05.163 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 528539' 00:12:05.163 killing process with pid 528539 00:12:05.163 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 528539 00:12:05.163 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@978 -- # wait 528539 00:12:05.163 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:05.163 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:05.163 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:05.163 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:12:05.163 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:12:05.163 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:05.163 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:12:05.163 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:05.163 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:05.163 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.163 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.163 13:43:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.695 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:07.695 00:12:07.695 real 0m10.732s 00:12:07.695 user 0m15.780s 00:12:07.695 sys 0m6.082s 00:12:07.695 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.695 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:07.695 ************************************ 00:12:07.695 END TEST nvmf_bdev_io_wait 00:12:07.695 
************************************ 00:12:07.695 13:43:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:07.695 13:43:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:07.695 13:43:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.695 13:43:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:07.695 ************************************ 00:12:07.695 START TEST nvmf_queue_depth 00:12:07.695 ************************************ 00:12:07.695 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:07.695 * Looking for test storage... 00:12:07.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:07.695 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:07.695 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:12:07.695 13:43:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:07.695 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:07.695 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:07.695 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:07.695 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:07.695 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:12:07.695 13:43:50 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:12:07.695 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:12:07.695 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:12:07.695 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:12:07.695 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:07.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.696 --rc genhtml_branch_coverage=1 00:12:07.696 --rc genhtml_function_coverage=1 00:12:07.696 --rc genhtml_legend=1 00:12:07.696 --rc geninfo_all_blocks=1 00:12:07.696 --rc 
geninfo_unexecuted_blocks=1 00:12:07.696 00:12:07.696 ' 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:07.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.696 --rc genhtml_branch_coverage=1 00:12:07.696 --rc genhtml_function_coverage=1 00:12:07.696 --rc genhtml_legend=1 00:12:07.696 --rc geninfo_all_blocks=1 00:12:07.696 --rc geninfo_unexecuted_blocks=1 00:12:07.696 00:12:07.696 ' 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:07.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.696 --rc genhtml_branch_coverage=1 00:12:07.696 --rc genhtml_function_coverage=1 00:12:07.696 --rc genhtml_legend=1 00:12:07.696 --rc geninfo_all_blocks=1 00:12:07.696 --rc geninfo_unexecuted_blocks=1 00:12:07.696 00:12:07.696 ' 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:07.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.696 --rc genhtml_branch_coverage=1 00:12:07.696 --rc genhtml_function_coverage=1 00:12:07.696 --rc genhtml_legend=1 00:12:07.696 --rc geninfo_all_blocks=1 00:12:07.696 --rc geninfo_unexecuted_blocks=1 00:12:07.696 00:12:07.696 ' 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.696 13:43:50 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.696 13:43:50 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:07.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:07.696 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:07.697 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:07.697 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.697 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:07.697 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:07.697 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:07.697 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.697 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.697 13:43:50 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.697 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:07.697 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:07.697 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:12:07.697 13:43:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:12:14.269 13:43:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:14.269 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:14.269 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:14.269 Found net devices under 0000:86:00.0: cvl_0_0 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:14.269 Found net devices under 0000:86:00.1: cvl_0_1 00:12:14.269 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.270 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:14.270 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:12:14.270 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:14.270 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:14.270 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:14.270 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:14.270 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:14.270 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:14.270 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:14.270 
13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:14.270 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:14.270 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:14.270 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:14.270 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:14.270 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:14.270 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:14.270 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:14.270 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:14.270 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:14.270 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:14.270 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:14.270 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:14.270 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:14.270 13:43:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:14.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:14.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:12:14.270 00:12:14.270 --- 10.0.0.2 ping statistics --- 00:12:14.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.270 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:14.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:14.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:12:14.270 00:12:14.270 --- 10.0.0.1 ping statistics --- 00:12:14.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.270 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=532538 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 532538 
00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 532538 ']' 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:14.270 [2024-12-05 13:43:56.171576] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:12:14.270 [2024-12-05 13:43:56.171623] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.270 [2024-12-05 13:43:56.251918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.270 [2024-12-05 13:43:56.291769] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:14.270 [2024-12-05 13:43:56.291806] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:14.270 [2024-12-05 13:43:56.291813] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:14.270 [2024-12-05 13:43:56.291819] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:14.270 [2024-12-05 13:43:56.291824] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:14.270 [2024-12-05 13:43:56.292359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:14.270 [2024-12-05 13:43:56.426907] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:14.270 Malloc0 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.270 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:14.271 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.271 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:14.271 [2024-12-05 13:43:56.477194] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:14.271 13:43:56 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.271 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=532608 00:12:14.271 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:14.271 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:14.271 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 532608 /var/tmp/bdevperf.sock 00:12:14.271 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 532608 ']' 00:12:14.271 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:14.271 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:14.271 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:14.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:14.271 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:14.271 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:14.271 [2024-12-05 13:43:56.529830] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:12:14.271 [2024-12-05 13:43:56.529871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid532608 ] 00:12:14.271 [2024-12-05 13:43:56.605260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.271 [2024-12-05 13:43:56.647499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.271 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:14.271 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:12:14.271 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:14.271 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.271 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:14.529 NVMe0n1 00:12:14.529 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.529 13:43:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:14.529 Running I/O for 10 seconds... 
00:12:16.844 11935.00 IOPS, 46.62 MiB/s [2024-12-05T12:44:00.366Z] 12279.00 IOPS, 47.96 MiB/s [2024-12-05T12:44:01.299Z] 12373.00 IOPS, 48.33 MiB/s [2024-12-05T12:44:02.230Z] 12353.25 IOPS, 48.25 MiB/s [2024-12-05T12:44:03.162Z] 12464.00 IOPS, 48.69 MiB/s [2024-12-05T12:44:04.100Z] 12447.83 IOPS, 48.62 MiB/s [2024-12-05T12:44:05.476Z] 12486.29 IOPS, 48.77 MiB/s [2024-12-05T12:44:06.412Z] 12516.50 IOPS, 48.89 MiB/s [2024-12-05T12:44:07.349Z] 12510.22 IOPS, 48.87 MiB/s [2024-12-05T12:44:07.349Z] 12554.40 IOPS, 49.04 MiB/s 00:12:24.762 Latency(us) 00:12:24.762 [2024-12-05T12:44:07.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:24.762 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:24.762 Verification LBA range: start 0x0 length 0x4000 00:12:24.762 NVMe0n1 : 10.06 12570.98 49.11 0.00 0.00 81182.95 18849.40 55175.07 00:12:24.762 [2024-12-05T12:44:07.349Z] =================================================================================================================== 00:12:24.762 [2024-12-05T12:44:07.349Z] Total : 12570.98 49.11 0.00 0.00 81182.95 18849.40 55175.07 00:12:24.762 { 00:12:24.762 "results": [ 00:12:24.762 { 00:12:24.762 "job": "NVMe0n1", 00:12:24.762 "core_mask": "0x1", 00:12:24.762 "workload": "verify", 00:12:24.762 "status": "finished", 00:12:24.762 "verify_range": { 00:12:24.762 "start": 0, 00:12:24.762 "length": 16384 00:12:24.762 }, 00:12:24.762 "queue_depth": 1024, 00:12:24.762 "io_size": 4096, 00:12:24.762 "runtime": 10.064606, 00:12:24.762 "iops": 12570.983901406573, 00:12:24.762 "mibps": 49.105405864869425, 00:12:24.762 "io_failed": 0, 00:12:24.762 "io_timeout": 0, 00:12:24.762 "avg_latency_us": 81182.94748001665, 00:12:24.762 "min_latency_us": 18849.401904761904, 00:12:24.762 "max_latency_us": 55175.07047619048 00:12:24.762 } 00:12:24.762 ], 00:12:24.762 "core_count": 1 00:12:24.762 } 00:12:24.762 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 
-- # killprocess 532608 00:12:24.762 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 532608 ']' 00:12:24.762 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 532608 00:12:24.762 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:12:24.762 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:24.762 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 532608 00:12:24.762 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:24.762 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:24.762 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 532608' 00:12:24.762 killing process with pid 532608 00:12:24.762 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 532608 00:12:24.762 Received shutdown signal, test time was about 10.000000 seconds 00:12:24.762 00:12:24.762 Latency(us) 00:12:24.762 [2024-12-05T12:44:07.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:24.762 [2024-12-05T12:44:07.349Z] =================================================================================================================== 00:12:24.762 [2024-12-05T12:44:07.349Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:24.762 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 532608 00:12:25.022 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:25.022 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:12:25.022 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:25.022 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:12:25.022 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:25.022 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:12:25.022 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:25.022 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:25.022 rmmod nvme_tcp 00:12:25.022 rmmod nvme_fabrics 00:12:25.022 rmmod nvme_keyring 00:12:25.022 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:25.022 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:12:25.022 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:12:25.022 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 532538 ']' 00:12:25.022 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 532538 00:12:25.022 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 532538 ']' 00:12:25.022 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 532538 00:12:25.022 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:12:25.022 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:25.022 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 532538 00:12:25.022 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:12:25.022 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:25.022 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 532538' 00:12:25.022 killing process with pid 532538 00:12:25.022 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 532538 00:12:25.022 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 532538 00:12:25.281 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:25.281 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:25.281 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:25.281 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:12:25.281 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:12:25.281 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:25.281 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:12:25.281 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:25.281 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:25.281 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.281 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.281 13:44:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.189 13:44:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:27.189 00:12:27.189 real 0m19.885s 00:12:27.189 user 0m23.224s 00:12:27.189 sys 0m6.127s 00:12:27.189 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:27.189 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:27.189 ************************************ 00:12:27.189 END TEST nvmf_queue_depth 00:12:27.189 ************************************ 00:12:27.449 13:44:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:27.449 13:44:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:27.449 13:44:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:27.449 13:44:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:27.449 ************************************ 00:12:27.449 START TEST nvmf_target_multipath 00:12:27.449 ************************************ 00:12:27.449 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:27.449 * Looking for test storage... 
00:12:27.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:27.449 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:27.449 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:12:27.449 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:27.449 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:27.449 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:27.449 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:12:27.450 13:44:09 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:27.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.450 --rc genhtml_branch_coverage=1 00:12:27.450 --rc genhtml_function_coverage=1 00:12:27.450 --rc genhtml_legend=1 00:12:27.450 --rc geninfo_all_blocks=1 00:12:27.450 --rc geninfo_unexecuted_blocks=1 00:12:27.450 00:12:27.450 ' 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:27.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.450 --rc genhtml_branch_coverage=1 00:12:27.450 --rc genhtml_function_coverage=1 00:12:27.450 --rc genhtml_legend=1 00:12:27.450 --rc geninfo_all_blocks=1 00:12:27.450 --rc geninfo_unexecuted_blocks=1 00:12:27.450 00:12:27.450 ' 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:27.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.450 --rc genhtml_branch_coverage=1 00:12:27.450 --rc genhtml_function_coverage=1 00:12:27.450 --rc genhtml_legend=1 00:12:27.450 --rc geninfo_all_blocks=1 00:12:27.450 --rc geninfo_unexecuted_blocks=1 00:12:27.450 00:12:27.450 ' 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:27.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.450 --rc genhtml_branch_coverage=1 00:12:27.450 --rc genhtml_function_coverage=1 00:12:27.450 --rc genhtml_legend=1 00:12:27.450 --rc geninfo_all_blocks=1 00:12:27.450 --rc geninfo_unexecuted_blocks=1 00:12:27.450 00:12:27.450 ' 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.450 13:44:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:27.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:27.450 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:27.451 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:27.451 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:27.451 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:27.451 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:27.451 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.451 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.451 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.451 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:27.451 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:27.451 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:12:27.451 13:44:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:34.021 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:34.021 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:34.022 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:34.022 Found net devices under 0000:86:00.0: cvl_0_0 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:34.022 13:44:15 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:34.022 Found net devices under 0000:86:00.1: cvl_0_1 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:34.022 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:34.022 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:12:34.022 00:12:34.022 --- 10.0.0.2 ping statistics --- 00:12:34.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.022 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:12:34.022 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:34.022 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:34.022 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:12:34.022 00:12:34.023 --- 10.0.0.1 ping statistics --- 00:12:34.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.023 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:12:34.023 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:34.023 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:12:34.023 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:34.023 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:34.023 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:34.023 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:34.023 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:34.023 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:34.023 13:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:34.023 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:12:34.023 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:12:34.023 only one NIC for nvmf test 00:12:34.023 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:12:34.023 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:34.023 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:12:34.023 13:44:16 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:34.023 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:12:34.023 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:34.023 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:34.023 rmmod nvme_tcp 00:12:34.023 rmmod nvme_fabrics 00:12:34.023 rmmod nvme_keyring 00:12:34.023 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:34.023 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:12:34.023 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:12:34.023 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:34.023 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:34.023 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:34.023 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:34.023 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:12:34.023 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:12:34.023 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:34.023 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:12:34.023 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:34.023 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:12:34.023 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.023 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.023 13:44:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.929 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:35.929 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:12:35.929 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:12:35.929 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:35.929 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:12:35.929 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:35.930 00:12:35.930 real 0m8.398s 00:12:35.930 user 0m1.810s 00:12:35.930 sys 0m4.582s 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:35.930 ************************************ 00:12:35.930 END TEST nvmf_target_multipath 00:12:35.930 ************************************ 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:35.930 ************************************ 00:12:35.930 START TEST nvmf_zcopy 00:12:35.930 ************************************ 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:35.930 * Looking for test storage... 00:12:35.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:12:35.930 13:44:18 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:35.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.930 --rc genhtml_branch_coverage=1 00:12:35.930 --rc genhtml_function_coverage=1 00:12:35.930 --rc genhtml_legend=1 00:12:35.930 --rc geninfo_all_blocks=1 00:12:35.930 --rc geninfo_unexecuted_blocks=1 00:12:35.930 00:12:35.930 ' 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:35.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.930 --rc genhtml_branch_coverage=1 00:12:35.930 --rc genhtml_function_coverage=1 00:12:35.930 --rc genhtml_legend=1 00:12:35.930 --rc geninfo_all_blocks=1 00:12:35.930 --rc geninfo_unexecuted_blocks=1 00:12:35.930 00:12:35.930 ' 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:35.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.930 --rc genhtml_branch_coverage=1 00:12:35.930 --rc genhtml_function_coverage=1 00:12:35.930 --rc genhtml_legend=1 00:12:35.930 --rc geninfo_all_blocks=1 00:12:35.930 --rc geninfo_unexecuted_blocks=1 00:12:35.930 00:12:35.930 ' 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:35.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.930 --rc genhtml_branch_coverage=1 00:12:35.930 --rc 
genhtml_function_coverage=1 00:12:35.930 --rc genhtml_legend=1 00:12:35.930 --rc geninfo_all_blocks=1 00:12:35.930 --rc geninfo_unexecuted_blocks=1 00:12:35.930 00:12:35.930 ' 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:35.930 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:35.931 13:44:18 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:35.931 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:35.931 13:44:18 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:12:35.931 13:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:42.493 13:44:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:42.493 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:42.493 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:42.493 Found net devices under 0000:86:00.0: cvl_0_0 00:12:42.493 13:44:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:42.493 Found net devices under 0000:86:00.1: cvl_0_1 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:42.493 13:44:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:42.493 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:42.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:42.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:12:42.494 00:12:42.494 --- 10.0.0.2 ping statistics --- 00:12:42.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.494 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:42.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:42.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:12:42.494 00:12:42.494 --- 10.0.0.1 ping statistics --- 00:12:42.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.494 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=541562 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 541562 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 541562 ']' 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:42.494 [2024-12-05 13:44:24.536325] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:12:42.494 [2024-12-05 13:44:24.536382] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.494 [2024-12-05 13:44:24.616967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.494 [2024-12-05 13:44:24.659481] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:42.494 [2024-12-05 13:44:24.659519] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:42.494 [2024-12-05 13:44:24.659527] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:42.494 [2024-12-05 13:44:24.659534] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:42.494 [2024-12-05 13:44:24.659540] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:42.494 [2024-12-05 13:44:24.660070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:42.494 [2024-12-05 13:44:24.804955] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:42.494 [2024-12-05 13:44:24.829179] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:42.494 malloc0 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:42.494 { 00:12:42.494 "params": { 00:12:42.494 "name": "Nvme$subsystem", 00:12:42.494 "trtype": "$TEST_TRANSPORT", 00:12:42.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:42.494 "adrfam": "ipv4", 00:12:42.494 "trsvcid": "$NVMF_PORT", 00:12:42.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:42.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:42.494 "hdgst": ${hdgst:-false}, 00:12:42.494 "ddgst": ${ddgst:-false} 00:12:42.494 }, 00:12:42.494 "method": "bdev_nvme_attach_controller" 00:12:42.494 } 00:12:42.494 EOF 00:12:42.494 )") 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:12:42.494 13:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:42.494 "params": { 00:12:42.494 "name": "Nvme1", 00:12:42.494 "trtype": "tcp", 00:12:42.494 "traddr": "10.0.0.2", 00:12:42.494 "adrfam": "ipv4", 00:12:42.494 "trsvcid": "4420", 00:12:42.495 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:42.495 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:42.495 "hdgst": false, 00:12:42.495 "ddgst": false 00:12:42.495 }, 00:12:42.495 "method": "bdev_nvme_attach_controller" 00:12:42.495 }' 00:12:42.495 [2024-12-05 13:44:24.919561] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:12:42.495 [2024-12-05 13:44:24.919606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid541708 ] 00:12:42.495 [2024-12-05 13:44:24.992164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.495 [2024-12-05 13:44:25.032760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.061 Running I/O for 10 seconds... 
00:12:45.049 8760.00 IOPS, 68.44 MiB/s [2024-12-05T12:44:28.574Z] 8843.50 IOPS, 69.09 MiB/s [2024-12-05T12:44:29.508Z] 8875.00 IOPS, 69.34 MiB/s [2024-12-05T12:44:30.445Z] 8900.50 IOPS, 69.54 MiB/s [2024-12-05T12:44:31.383Z] 8894.60 IOPS, 69.49 MiB/s [2024-12-05T12:44:32.760Z] 8901.00 IOPS, 69.54 MiB/s [2024-12-05T12:44:33.697Z] 8909.71 IOPS, 69.61 MiB/s [2024-12-05T12:44:34.634Z] 8916.88 IOPS, 69.66 MiB/s [2024-12-05T12:44:35.567Z] 8922.67 IOPS, 69.71 MiB/s [2024-12-05T12:44:35.567Z] 8928.10 IOPS, 69.75 MiB/s 00:12:52.980 Latency(us) 00:12:52.980 [2024-12-05T12:44:35.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:52.980 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:12:52.980 Verification LBA range: start 0x0 length 0x1000 00:12:52.980 Nvme1n1 : 10.01 8928.52 69.75 0.00 0.00 14294.91 518.83 21346.01 00:12:52.980 [2024-12-05T12:44:35.567Z] =================================================================================================================== 00:12:52.980 [2024-12-05T12:44:35.567Z] Total : 8928.52 69.75 0.00 0.00 14294.91 518.83 21346.01 00:12:52.980 13:44:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=543390 00:12:52.980 13:44:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:52.980 13:44:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:52.981 13:44:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:52.981 13:44:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:52.981 13:44:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:12:52.981 13:44:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:12:52.981 13:44:35 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:52.981 13:44:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:52.981 { 00:12:52.981 "params": { 00:12:52.981 "name": "Nvme$subsystem", 00:12:52.981 "trtype": "$TEST_TRANSPORT", 00:12:52.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:52.981 "adrfam": "ipv4", 00:12:52.981 "trsvcid": "$NVMF_PORT", 00:12:52.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:52.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:52.981 "hdgst": ${hdgst:-false}, 00:12:52.981 "ddgst": ${ddgst:-false} 00:12:52.981 }, 00:12:52.981 "method": "bdev_nvme_attach_controller" 00:12:52.981 } 00:12:52.981 EOF 00:12:52.981 )") 00:12:52.981 13:44:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:12:52.981 [2024-12-05 13:44:35.556165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.981 [2024-12-05 13:44:35.556199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.981 13:44:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:12:52.981 13:44:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:12:52.981 13:44:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:52.981 "params": { 00:12:52.981 "name": "Nvme1", 00:12:52.981 "trtype": "tcp", 00:12:52.981 "traddr": "10.0.0.2", 00:12:52.981 "adrfam": "ipv4", 00:12:52.981 "trsvcid": "4420", 00:12:52.981 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:52.981 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:52.981 "hdgst": false, 00:12:52.981 "ddgst": false 00:12:52.981 }, 00:12:52.981 "method": "bdev_nvme_attach_controller" 00:12:52.981 }' 00:12:53.238 [2024-12-05 13:44:35.568166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.238 [2024-12-05 13:44:35.568182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.238 [2024-12-05 13:44:35.580184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.238 [2024-12-05 13:44:35.580194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.238 [2024-12-05 13:44:35.591265] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:12:53.238 [2024-12-05 13:44:35.591304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid543390 ] 00:12:53.238 [2024-12-05 13:44:35.592216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.238 [2024-12-05 13:44:35.592226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.238 [2024-12-05 13:44:35.604247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.238 [2024-12-05 13:44:35.604257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.238 [2024-12-05 13:44:35.616279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.238 [2024-12-05 13:44:35.616290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.238 [2024-12-05 13:44:35.628310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.238 [2024-12-05 13:44:35.628320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.238 [2024-12-05 13:44:35.640341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.238 [2024-12-05 13:44:35.640352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.238 [2024-12-05 13:44:35.652381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.238 [2024-12-05 13:44:35.652391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.238 [2024-12-05 13:44:35.664411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.238 [2024-12-05 13:44:35.664420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:12:53.239 [2024-12-05 13:44:35.665562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.239 [2024-12-05 13:44:35.676442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.239 [2024-12-05 13:44:35.676457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.239 [2024-12-05 13:44:35.688475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.239 [2024-12-05 13:44:35.688487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.239 [2024-12-05 13:44:35.700508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.239 [2024-12-05 13:44:35.700518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.239 [2024-12-05 13:44:35.706939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.239 [2024-12-05 13:44:35.712541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.239 [2024-12-05 13:44:35.712553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.239 [2024-12-05 13:44:35.724587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.239 [2024-12-05 13:44:35.724608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.239 [2024-12-05 13:44:35.736612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.239 [2024-12-05 13:44:35.736629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.239 [2024-12-05 13:44:35.748643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.239 [2024-12-05 13:44:35.748658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.239 [2024-12-05 13:44:35.760673] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.239 [2024-12-05 13:44:35.760686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.239 [2024-12-05 13:44:35.772702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.239 [2024-12-05 13:44:35.772716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.239 [2024-12-05 13:44:35.784730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.239 [2024-12-05 13:44:35.784740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.239 [2024-12-05 13:44:35.796786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.239 [2024-12-05 13:44:35.796808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.239 [2024-12-05 13:44:35.808826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.239 [2024-12-05 13:44:35.808841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.239 [2024-12-05 13:44:35.820852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.239 [2024-12-05 13:44:35.820866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.496 [2024-12-05 13:44:35.832887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.496 [2024-12-05 13:44:35.832898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.496 [2024-12-05 13:44:35.844913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.496 [2024-12-05 13:44:35.844922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.496 [2024-12-05 13:44:35.856947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:53.496 [2024-12-05 13:44:35.856957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.496 [2024-12-05 13:44:35.869009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.496 [2024-12-05 13:44:35.869023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.496 [2024-12-05 13:44:35.881033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.496 [2024-12-05 13:44:35.881047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.496 [2024-12-05 13:44:35.931598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.496 [2024-12-05 13:44:35.931618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.496 [2024-12-05 13:44:35.941207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.496 [2024-12-05 13:44:35.941218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.496 Running I/O for 5 seconds... 
00:12:53.496 [2024-12-05 13:44:35.957411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.496 [2024-12-05 13:44:35.957429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.496 [2024-12-05 13:44:35.971093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.496 [2024-12-05 13:44:35.971111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.496 [2024-12-05 13:44:35.984728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.496 [2024-12-05 13:44:35.984746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.496 [2024-12-05 13:44:35.998743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.496 [2024-12-05 13:44:35.998761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.496 [2024-12-05 13:44:36.010147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.496 [2024-12-05 13:44:36.010165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.496 [2024-12-05 13:44:36.024044] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.496 [2024-12-05 13:44:36.024063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.496 [2024-12-05 13:44:36.037601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.497 [2024-12-05 13:44:36.037620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.497 [2024-12-05 13:44:36.051512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.497 [2024-12-05 13:44:36.051532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.497 [2024-12-05 13:44:36.060381] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:53.497 [2024-12-05 13:44:36.060399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:53.497 [2024-12-05 13:44:36.069511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:53.497 [2024-12-05 13:44:36.069529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:53.754 [2024-12-05 13:44:36.084131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:53.754 [2024-12-05 13:44:36.084150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:53.754 [2024-12-05 13:44:36.095289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:53.754 [2024-12-05 13:44:36.095313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:53.754 [2024-12-05 13:44:36.104646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:53.754 [2024-12-05 13:44:36.104664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:53.754 [2024-12-05 13:44:36.118694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:53.754 [2024-12-05 13:44:36.118712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:53.754 [2024-12-05 13:44:36.132337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:53.754 [2024-12-05 13:44:36.132356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:53.754 [2024-12-05 13:44:36.146160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:53.754 [2024-12-05 13:44:36.146179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:53.754 [2024-12-05 13:44:36.159511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:53.754 [2024-12-05 13:44:36.159529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:53.754 [2024-12-05 13:44:36.173092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:53.754 [2024-12-05 13:44:36.173111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:53.754 [2024-12-05 13:44:36.187066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:53.754 [2024-12-05 13:44:36.187084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:53.754 [2024-12-05 13:44:36.200312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:53.754 [2024-12-05 13:44:36.200332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:53.754 [2024-12-05 13:44:36.213870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:53.754 [2024-12-05 13:44:36.213889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:53.754 [2024-12-05 13:44:36.227358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:53.754 [2024-12-05 13:44:36.227382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:53.754 [2024-12-05 13:44:36.240876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:53.754 [2024-12-05 13:44:36.240894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:53.754 [2024-12-05 13:44:36.249750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:53.754 [2024-12-05 13:44:36.249768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:53.754 [2024-12-05 13:44:36.259125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:53.754 [2024-12-05 13:44:36.259147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:53.754 [2024-12-05 13:44:36.273449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:53.754 [2024-12-05 13:44:36.273467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:53.754 [2024-12-05 13:44:36.286937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:53.754 [2024-12-05 13:44:36.286954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:53.754 [2024-12-05 13:44:36.300743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:53.754 [2024-12-05 13:44:36.300760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:53.754 [2024-12-05 13:44:36.314200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:53.754 [2024-12-05 13:44:36.314218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:53.754 [2024-12-05 13:44:36.327689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:53.754 [2024-12-05 13:44:36.327707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.013 [2024-12-05 13:44:36.341813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.013 [2024-12-05 13:44:36.341831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.013 [2024-12-05 13:44:36.355787] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.013 [2024-12-05 13:44:36.355805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.013 [2024-12-05 13:44:36.366424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.013 [2024-12-05 13:44:36.366442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.013 [2024-12-05 13:44:36.380504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.013 [2024-12-05 13:44:36.380522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.013 [2024-12-05 13:44:36.393820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.013 [2024-12-05 13:44:36.393838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.013 [2024-12-05 13:44:36.407383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.013 [2024-12-05 13:44:36.407401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.013 [2024-12-05 13:44:36.420945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.013 [2024-12-05 13:44:36.420968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.013 [2024-12-05 13:44:36.434025] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.013 [2024-12-05 13:44:36.434044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.013 [2024-12-05 13:44:36.447710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.013 [2024-12-05 13:44:36.447727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.013 [2024-12-05 13:44:36.461021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.013 [2024-12-05 13:44:36.461039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.013 [2024-12-05 13:44:36.474397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.013 [2024-12-05 13:44:36.474415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.013 [2024-12-05 13:44:36.487958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.013 [2024-12-05 13:44:36.487979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.013 [2024-12-05 13:44:36.501197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.013 [2024-12-05 13:44:36.501215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.013 [2024-12-05 13:44:36.515094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.013 [2024-12-05 13:44:36.515120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.013 [2024-12-05 13:44:36.528556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.013 [2024-12-05 13:44:36.528574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.013 [2024-12-05 13:44:36.542271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.013 [2024-12-05 13:44:36.542289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.013 [2024-12-05 13:44:36.550981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.013 [2024-12-05 13:44:36.550998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.013 [2024-12-05 13:44:36.559618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.013 [2024-12-05 13:44:36.559635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.013 [2024-12-05 13:44:36.568964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.013 [2024-12-05 13:44:36.568982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.013 [2024-12-05 13:44:36.578239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.013 [2024-12-05 13:44:36.578256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.013 [2024-12-05 13:44:36.592484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.013 [2024-12-05 13:44:36.592502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.271 [2024-12-05 13:44:36.605965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.271 [2024-12-05 13:44:36.605982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.271 [2024-12-05 13:44:36.619446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.271 [2024-12-05 13:44:36.619464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.271 [2024-12-05 13:44:36.632991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.271 [2024-12-05 13:44:36.633009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.271 [2024-12-05 13:44:36.642429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.271 [2024-12-05 13:44:36.642448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.271 [2024-12-05 13:44:36.656387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.271 [2024-12-05 13:44:36.656405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.271 [2024-12-05 13:44:36.665251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.271 [2024-12-05 13:44:36.665269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.271 [2024-12-05 13:44:36.674412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.271 [2024-12-05 13:44:36.674430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.271 [2024-12-05 13:44:36.684011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.271 [2024-12-05 13:44:36.684029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.271 [2024-12-05 13:44:36.692686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.271 [2024-12-05 13:44:36.692703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.271 [2024-12-05 13:44:36.706731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.271 [2024-12-05 13:44:36.706750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.271 [2024-12-05 13:44:36.720300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.271 [2024-12-05 13:44:36.720318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.271 [2024-12-05 13:44:36.733873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.271 [2024-12-05 13:44:36.733896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.271 [2024-12-05 13:44:36.747326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.271 [2024-12-05 13:44:36.747345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.271 [2024-12-05 13:44:36.760599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.271 [2024-12-05 13:44:36.760621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.271 [2024-12-05 13:44:36.774472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.271 [2024-12-05 13:44:36.774491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.271 [2024-12-05 13:44:36.787579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.271 [2024-12-05 13:44:36.787597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.271 [2024-12-05 13:44:36.801046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.271 [2024-12-05 13:44:36.801063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.271 [2024-12-05 13:44:36.815009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.271 [2024-12-05 13:44:36.815028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.271 [2024-12-05 13:44:36.828620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.271 [2024-12-05 13:44:36.828638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.271 [2024-12-05 13:44:36.842059] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.271 [2024-12-05 13:44:36.842078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.271 [2024-12-05 13:44:36.856099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.271 [2024-12-05 13:44:36.856118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.530 [2024-12-05 13:44:36.870131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.530 [2024-12-05 13:44:36.870150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.530 [2024-12-05 13:44:36.879084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.530 [2024-12-05 13:44:36.879102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.530 [2024-12-05 13:44:36.893076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.530 [2024-12-05 13:44:36.893094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.530 [2024-12-05 13:44:36.906969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.530 [2024-12-05 13:44:36.906989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.530 [2024-12-05 13:44:36.920717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.530 [2024-12-05 13:44:36.920736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.530 [2024-12-05 13:44:36.934148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.530 [2024-12-05 13:44:36.934166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.530 [2024-12-05 13:44:36.947860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.530 [2024-12-05 13:44:36.947878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.530 17022.00 IOPS, 132.98 MiB/s [2024-12-05T12:44:37.117Z] [2024-12-05 13:44:36.961620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.530 [2024-12-05 13:44:36.961638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.531 [2024-12-05 13:44:36.974812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.531 [2024-12-05 13:44:36.974830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.531 [2024-12-05 13:44:36.988354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.531 [2024-12-05 13:44:36.988379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.531 [2024-12-05 13:44:37.001725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.531 [2024-12-05 13:44:37.001743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.531 [2024-12-05 13:44:37.015313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.531 [2024-12-05 13:44:37.015332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.531 [2024-12-05 13:44:37.028763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.531 [2024-12-05 13:44:37.028783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.531 [2024-12-05 13:44:37.042157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.531 [2024-12-05 13:44:37.042176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.531 [2024-12-05 13:44:37.055777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.531 [2024-12-05 13:44:37.055796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.531 [2024-12-05 13:44:37.069384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.531 [2024-12-05 13:44:37.069403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.531 [2024-12-05 13:44:37.083020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.531 [2024-12-05 13:44:37.083039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.531 [2024-12-05 13:44:37.096800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.531 [2024-12-05 13:44:37.096819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.531 [2024-12-05 13:44:37.110187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.531 [2024-12-05 13:44:37.110205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.788 [2024-12-05 13:44:37.123893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.788 [2024-12-05 13:44:37.123912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.788 [2024-12-05 13:44:37.137913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.788 [2024-12-05 13:44:37.137932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.788 [2024-12-05 13:44:37.148889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.788 [2024-12-05 13:44:37.148908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.788 [2024-12-05 13:44:37.163013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.789 [2024-12-05 13:44:37.163031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.789 [2024-12-05 13:44:37.176071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.789 [2024-12-05 13:44:37.176089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.789 [2024-12-05 13:44:37.185080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.789 [2024-12-05 13:44:37.185098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.789 [2024-12-05 13:44:37.199333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.789 [2024-12-05 13:44:37.199352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.789 [2024-12-05 13:44:37.213026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.789 [2024-12-05 13:44:37.213044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.789 [2024-12-05 13:44:37.226639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.789 [2024-12-05 13:44:37.226657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.789 [2024-12-05 13:44:37.239583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.789 [2024-12-05 13:44:37.239601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.789 [2024-12-05 13:44:37.253317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.789 [2024-12-05 13:44:37.253335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.789 [2024-12-05 13:44:37.267673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.789 [2024-12-05 13:44:37.267691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.789 [2024-12-05 13:44:37.281337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.789 [2024-12-05 13:44:37.281354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.789 [2024-12-05 13:44:37.294805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.789 [2024-12-05 13:44:37.294823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.789 [2024-12-05 13:44:37.308428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.789 [2024-12-05 13:44:37.308445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.789 [2024-12-05 13:44:37.321780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.789 [2024-12-05 13:44:37.321797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.789 [2024-12-05 13:44:37.330553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.789 [2024-12-05 13:44:37.330571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.789 [2024-12-05 13:44:37.339738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.789 [2024-12-05 13:44:37.339756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.789 [2024-12-05 13:44:37.348843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.789 [2024-12-05 13:44:37.348861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.789 [2024-12-05 13:44:37.362958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.789 [2024-12-05 13:44:37.362977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.046 [2024-12-05 13:44:37.376267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.046 [2024-12-05 13:44:37.376286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.046 [2024-12-05 13:44:37.390263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.046 [2024-12-05 13:44:37.390281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.046 [2024-12-05 13:44:37.399292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.046 [2024-12-05 13:44:37.399309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.046 [2024-12-05 13:44:37.413564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.046 [2024-12-05 13:44:37.413582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.046 [2024-12-05 13:44:37.427532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.046 [2024-12-05 13:44:37.427552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.046 [2024-12-05 13:44:37.441017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.046 [2024-12-05 13:44:37.441035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.046 [2024-12-05 13:44:37.454523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.046 [2024-12-05 13:44:37.454541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.046 [2024-12-05 13:44:37.467759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.046 [2024-12-05 13:44:37.467777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.046 [2024-12-05 13:44:37.481260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.047 [2024-12-05 13:44:37.481278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.047 [2024-12-05 13:44:37.494861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.047 [2024-12-05 13:44:37.494879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.047 [2024-12-05 13:44:37.508767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.047 [2024-12-05 13:44:37.508785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.047 [2024-12-05 13:44:37.522818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.047 [2024-12-05 13:44:37.522836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.047 [2024-12-05 13:44:37.535986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.047 [2024-12-05 13:44:37.536004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.047 [2024-12-05 13:44:37.549522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.047 [2024-12-05 13:44:37.549540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.047 [2024-12-05 13:44:37.563718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.047 [2024-12-05 13:44:37.563737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.047 [2024-12-05 13:44:37.577694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.047 [2024-12-05 13:44:37.577712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.047 [2024-12-05 13:44:37.591371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.047 [2024-12-05 13:44:37.591389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.047 [2024-12-05 13:44:37.605938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.047 [2024-12-05 13:44:37.605957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.047 [2024-12-05 13:44:37.616267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.047 [2024-12-05 13:44:37.616285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.047 [2024-12-05 13:44:37.630103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.047 [2024-12-05 13:44:37.630121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.305 [2024-12-05 13:44:37.644148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.305 [2024-12-05 13:44:37.644167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.305 [2024-12-05 13:44:37.658155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.305 [2024-12-05 13:44:37.658172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.305 [2024-12-05 13:44:37.671905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.305 [2024-12-05 13:44:37.671923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.305 [2024-12-05 13:44:37.681435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.305 [2024-12-05 13:44:37.681453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.305 [2024-12-05 13:44:37.694938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.305 [2024-12-05 13:44:37.694956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.305 [2024-12-05 13:44:37.704249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.305 [2024-12-05 13:44:37.704266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.305 [2024-12-05 13:44:37.718314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.305 [2024-12-05 13:44:37.718337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.305 [2024-12-05 13:44:37.732803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.305 [2024-12-05 13:44:37.732821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.305 [2024-12-05 13:44:37.746522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.305 [2024-12-05 13:44:37.746540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.305 [2024-12-05 13:44:37.759880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.305 [2024-12-05 13:44:37.759898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.305 [2024-12-05 13:44:37.773659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.305 [2024-12-05 13:44:37.773677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.305 [2024-12-05 13:44:37.787382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.305 [2024-12-05 13:44:37.787399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.305 [2024-12-05 13:44:37.796251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.305 [2024-12-05 13:44:37.796268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.305 [2024-12-05 13:44:37.805650] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.305 [2024-12-05 13:44:37.805668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.305 [2024-12-05 13:44:37.819625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.305 [2024-12-05 13:44:37.819643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.305 [2024-12-05 13:44:37.833017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.305 [2024-12-05 13:44:37.833034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.305 [2024-12-05 13:44:37.846458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.305 [2024-12-05 13:44:37.846477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.305 [2024-12-05 13:44:37.860293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.305 [2024-12-05 13:44:37.860310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.305 [2024-12-05 13:44:37.874322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.305 [2024-12-05 13:44:37.874341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.305 [2024-12-05 13:44:37.887982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.305 [2024-12-05 13:44:37.888000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.562 [2024-12-05 13:44:37.901534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.562 [2024-12-05 13:44:37.901552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.562 [2024-12-05 13:44:37.915263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.562 [2024-12-05 13:44:37.915280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.562 [2024-12-05 13:44:37.928837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.562 [2024-12-05 13:44:37.928856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.562 [2024-12-05 13:44:37.938308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.562 [2024-12-05 13:44:37.938328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.562 17137.00 IOPS, 133.88 MiB/s [2024-12-05T12:44:38.149Z] [2024-12-05 13:44:37.952740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.562 [2024-12-05 13:44:37.952759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.562 [2024-12-05 13:44:37.966010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.562 [2024-12-05 13:44:37.966033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.563 [2024-12-05 13:44:37.979663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.563 [2024-12-05 13:44:37.979680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.563 [2024-12-05 13:44:37.993080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.563 [2024-12-05 13:44:37.993097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.563 [2024-12-05 13:44:38.006851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.563 [2024-12-05 13:44:38.006869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.563 [2024-12-05 13:44:38.020283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.563 [2024-12-05 13:44:38.020301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.563 [2024-12-05 13:44:38.033875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.563 [2024-12-05 13:44:38.033893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.563 [2024-12-05 13:44:38.047431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.563 [2024-12-05 13:44:38.047449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.563 [2024-12-05 13:44:38.061254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.563 [2024-12-05 13:44:38.061272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.563 [2024-12-05 13:44:38.074946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.563 [2024-12-05 13:44:38.074964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.563 [2024-12-05 13:44:38.088773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.563 [2024-12-05 13:44:38.088791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.563 [2024-12-05 13:44:38.102309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.563 [2024-12-05 13:44:38.102327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.563 [2024-12-05 13:44:38.116257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.563 [2024-12-05 13:44:38.116275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.563 [2024-12-05 13:44:38.130202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.563 [2024-12-05 13:44:38.130220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.563 [2024-12-05 13:44:38.143862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.563 [2024-12-05 13:44:38.143879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.821 [2024-12-05 13:44:38.157533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.821 [2024-12-05 13:44:38.157550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.821 [2024-12-05 13:44:38.171172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.821 [2024-12-05 13:44:38.171189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.821 [2024-12-05 13:44:38.184755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.821 [2024-12-05 13:44:38.184773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.821 [2024-12-05 13:44:38.198046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.821 [2024-12-05 13:44:38.198065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.821 [2024-12-05 13:44:38.211657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.821 [2024-12-05 13:44:38.211676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.821 [2024-12-05 13:44:38.224927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.821 [2024-12-05 13:44:38.224952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.821 [2024-12-05 13:44:38.238389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.821 [2024-12-05 13:44:38.238409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.821 [2024-12-05 13:44:38.252014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.821 [2024-12-05 13:44:38.252033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.821 [2024-12-05 13:44:38.265800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.821 [2024-12-05 13:44:38.265819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.821 [2024-12-05 13:44:38.279299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.821 [2024-12-05 13:44:38.279317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:12:55.821 [2024-12-05 13:44:38.293063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.821 [2024-12-05 13:44:38.293081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.821 [2024-12-05 13:44:38.306654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.821 [2024-12-05 13:44:38.306672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.821 [2024-12-05 13:44:38.315486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.821 [2024-12-05 13:44:38.315504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.821 [2024-12-05 13:44:38.329676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.821 [2024-12-05 13:44:38.329695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.821 [2024-12-05 13:44:38.343759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.821 [2024-12-05 13:44:38.343777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.821 [2024-12-05 13:44:38.357833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.821 [2024-12-05 13:44:38.357851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.821 [2024-12-05 13:44:38.371969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.821 [2024-12-05 13:44:38.371987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.821 [2024-12-05 13:44:38.382265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.821 [2024-12-05 13:44:38.382283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.821 [2024-12-05 13:44:38.395970] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.821 [2024-12-05 13:44:38.395987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.080 [2024-12-05 13:44:38.410296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.080 [2024-12-05 13:44:38.410313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.080 [2024-12-05 13:44:38.425488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.080 [2024-12-05 13:44:38.425508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.081 [2024-12-05 13:44:38.439701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.081 [2024-12-05 13:44:38.439720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.081 [2024-12-05 13:44:38.453474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.081 [2024-12-05 13:44:38.453493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.081 [2024-12-05 13:44:38.467244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.081 [2024-12-05 13:44:38.467263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.081 [2024-12-05 13:44:38.480744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.081 [2024-12-05 13:44:38.480767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.081 [2024-12-05 13:44:38.494497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.081 [2024-12-05 13:44:38.494515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.081 [2024-12-05 13:44:38.507689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:56.081 [2024-12-05 13:44:38.507708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.081 [2024-12-05 13:44:38.517097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.081 [2024-12-05 13:44:38.517116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.081 [2024-12-05 13:44:38.530927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.081 [2024-12-05 13:44:38.530945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.081 [2024-12-05 13:44:38.539716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.081 [2024-12-05 13:44:38.539734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.081 [2024-12-05 13:44:38.553474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.081 [2024-12-05 13:44:38.553492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.081 [2024-12-05 13:44:38.566823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.081 [2024-12-05 13:44:38.566842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.081 [2024-12-05 13:44:38.576157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.081 [2024-12-05 13:44:38.576175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.081 [2024-12-05 13:44:38.590039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.081 [2024-12-05 13:44:38.590058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.081 [2024-12-05 13:44:38.598904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.081 
[2024-12-05 13:44:38.598921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.081 [2024-12-05 13:44:38.612871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.081 [2024-12-05 13:44:38.612889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.081 [2024-12-05 13:44:38.626046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.081 [2024-12-05 13:44:38.626064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.081 [2024-12-05 13:44:38.639875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.081 [2024-12-05 13:44:38.639892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.081 [2024-12-05 13:44:38.653683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.081 [2024-12-05 13:44:38.653701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.340 [2024-12-05 13:44:38.667278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.340 [2024-12-05 13:44:38.667296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.340 [2024-12-05 13:44:38.680743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.340 [2024-12-05 13:44:38.680761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.340 [2024-12-05 13:44:38.694599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.340 [2024-12-05 13:44:38.694617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.340 [2024-12-05 13:44:38.707965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.340 [2024-12-05 13:44:38.707983] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.340 [2024-12-05 13:44:38.720861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.340 [2024-12-05 13:44:38.720878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.340 [2024-12-05 13:44:38.734506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.340 [2024-12-05 13:44:38.734524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.340 [2024-12-05 13:44:38.748501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.340 [2024-12-05 13:44:38.748519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.340 [2024-12-05 13:44:38.757322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.340 [2024-12-05 13:44:38.757339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.340 [2024-12-05 13:44:38.766550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.340 [2024-12-05 13:44:38.766568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.340 [2024-12-05 13:44:38.775990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.340 [2024-12-05 13:44:38.776007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.340 [2024-12-05 13:44:38.785652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.340 [2024-12-05 13:44:38.785671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.340 [2024-12-05 13:44:38.799907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.340 [2024-12-05 13:44:38.799926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:56.340 [2024-12-05 13:44:38.813119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.340 [2024-12-05 13:44:38.813137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.340 [2024-12-05 13:44:38.826611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.340 [2024-12-05 13:44:38.826629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.340 [2024-12-05 13:44:38.840029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.340 [2024-12-05 13:44:38.840046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.340 [2024-12-05 13:44:38.853216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.340 [2024-12-05 13:44:38.853235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.340 [2024-12-05 13:44:38.866346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.340 [2024-12-05 13:44:38.866364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.340 [2024-12-05 13:44:38.880108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.340 [2024-12-05 13:44:38.880126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.340 [2024-12-05 13:44:38.893388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.340 [2024-12-05 13:44:38.893407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.340 [2024-12-05 13:44:38.902577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.340 [2024-12-05 13:44:38.902595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.341 [2024-12-05 13:44:38.916624] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.341 [2024-12-05 13:44:38.916643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.600 [2024-12-05 13:44:38.930411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.600 [2024-12-05 13:44:38.930430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.600 [2024-12-05 13:44:38.943678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.600 [2024-12-05 13:44:38.943696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.600 17168.00 IOPS, 134.12 MiB/s [2024-12-05T12:44:39.187Z] [2024-12-05 13:44:38.957272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.600 [2024-12-05 13:44:38.957291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.600 [2024-12-05 13:44:38.966038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.600 [2024-12-05 13:44:38.966059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.600 [2024-12-05 13:44:38.980238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.600 [2024-12-05 13:44:38.980256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.600 [2024-12-05 13:44:38.993451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.600 [2024-12-05 13:44:38.993468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.600 [2024-12-05 13:44:39.002223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.600 [2024-12-05 13:44:39.002241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.600 [2024-12-05 13:44:39.011503] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.600 [2024-12-05 13:44:39.011520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.600 [2024-12-05 13:44:39.020729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.600 [2024-12-05 13:44:39.020747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.600 [2024-12-05 13:44:39.035521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.600 [2024-12-05 13:44:39.035539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.600 [2024-12-05 13:44:39.045706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.600 [2024-12-05 13:44:39.045723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.600 [2024-12-05 13:44:39.055004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.600 [2024-12-05 13:44:39.055021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.600 [2024-12-05 13:44:39.069140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.600 [2024-12-05 13:44:39.069158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.600 [2024-12-05 13:44:39.082338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.600 [2024-12-05 13:44:39.082356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.600 [2024-12-05 13:44:39.096161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.600 [2024-12-05 13:44:39.096180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.600 [2024-12-05 13:44:39.109981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:56.601 [2024-12-05 13:44:39.109998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.601 [2024-12-05 13:44:39.123704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.601 [2024-12-05 13:44:39.123722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.601 [2024-12-05 13:44:39.137501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.601 [2024-12-05 13:44:39.137519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.601 [2024-12-05 13:44:39.150617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.601 [2024-12-05 13:44:39.150635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.601 [2024-12-05 13:44:39.164241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.601 [2024-12-05 13:44:39.164259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.601 [2024-12-05 13:44:39.177988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.601 [2024-12-05 13:44:39.178010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.861 [2024-12-05 13:44:39.191065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.861 [2024-12-05 13:44:39.191083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.861 [2024-12-05 13:44:39.204849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.861 [2024-12-05 13:44:39.204867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.861 [2024-12-05 13:44:39.218258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.861 
[2024-12-05 13:44:39.218276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.861 [2024-12-05 13:44:39.231548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.861 [2024-12-05 13:44:39.231565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.861 [2024-12-05 13:44:39.245232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.861 [2024-12-05 13:44:39.245250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.861 [2024-12-05 13:44:39.258862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.861 [2024-12-05 13:44:39.258879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.861 [2024-12-05 13:44:39.272461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.861 [2024-12-05 13:44:39.272479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.861 [2024-12-05 13:44:39.285813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.861 [2024-12-05 13:44:39.285831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.861 [2024-12-05 13:44:39.299408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.861 [2024-12-05 13:44:39.299425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.861 [2024-12-05 13:44:39.312770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.861 [2024-12-05 13:44:39.312787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.861 [2024-12-05 13:44:39.326035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.861 [2024-12-05 13:44:39.326053] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.861 [2024-12-05 13:44:39.339433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.861 [2024-12-05 13:44:39.339450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.861 [2024-12-05 13:44:39.353188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.861 [2024-12-05 13:44:39.353206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.861 [2024-12-05 13:44:39.362242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.861 [2024-12-05 13:44:39.362259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.861 [2024-12-05 13:44:39.376254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.861 [2024-12-05 13:44:39.376271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.861 [2024-12-05 13:44:39.389432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.861 [2024-12-05 13:44:39.389450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.861 [2024-12-05 13:44:39.402884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.861 [2024-12-05 13:44:39.402901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.861 [2024-12-05 13:44:39.415919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.861 [2024-12-05 13:44:39.415937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.861 [2024-12-05 13:44:39.429628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.861 [2024-12-05 13:44:39.429650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:56.861 [2024-12-05 13:44:39.443667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.861 [2024-12-05 13:44:39.443685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.124 [2024-12-05 13:44:39.457358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.124 [2024-12-05 13:44:39.457382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.124 [2024-12-05 13:44:39.470844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.124 [2024-12-05 13:44:39.470861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.124 [2024-12-05 13:44:39.484506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.124 [2024-12-05 13:44:39.484523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.124 [2024-12-05 13:44:39.497619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.124 [2024-12-05 13:44:39.497637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.124 [2024-12-05 13:44:39.510976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.124 [2024-12-05 13:44:39.510993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.124 [2024-12-05 13:44:39.524136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.124 [2024-12-05 13:44:39.524153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.124 [2024-12-05 13:44:39.532963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.124 [2024-12-05 13:44:39.532980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.124 [2024-12-05 13:44:39.546635] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.124 [2024-12-05 13:44:39.546652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.124 [2024-12-05 13:44:39.559884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.124 [2024-12-05 13:44:39.559901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.124 [2024-12-05 13:44:39.573987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.124 [2024-12-05 13:44:39.574005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.124 [2024-12-05 13:44:39.584915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.124 [2024-12-05 13:44:39.584934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.124 [2024-12-05 13:44:39.594519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.124 [2024-12-05 13:44:39.594539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.124 [2024-12-05 13:44:39.608692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.124 [2024-12-05 13:44:39.608712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.124 [2024-12-05 13:44:39.621821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.124 [2024-12-05 13:44:39.621841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.124 [2024-12-05 13:44:39.635848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.124 [2024-12-05 13:44:39.635868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.124 [2024-12-05 13:44:39.649727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:57.124 [2024-12-05 13:44:39.649746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.124 
17195.00 IOPS, 134.34 MiB/s [2024-12-05T12:44:39.976Z] 
17169.40 IOPS, 134.14 MiB/s 00:12:58.430 Latency(us) 00:12:58.430 [2024-12-05T12:44:41.017Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:58.430 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:12:58.430 Nvme1n1 : 5.01 17172.00 134.16 0.00 0.00 7447.17 2995.93 17476.27 00:12:58.430 [2024-12-05T12:44:41.017Z] =================================================================================================================== 00:12:58.430 [2024-12-05T12:44:41.017Z] Total : 17172.00 134.16 0.00 0.00 7447.17 2995.93 17476.27 00:12:58.430 [2024-12-05 13:44:41.122972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:58.689 [2024-12-05 13:44:41.122987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (543390) - No such process 00:12:58.689 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 543390 00:12:58.689 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.689 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.689 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:58.689 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.689 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:58.689 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.689 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:58.689 delay0 00:12:58.689 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.689 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:58.689 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.689 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:58.689 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.689 13:44:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:12:58.689 [2024-12-05 13:44:41.273711] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:05.254 Initializing NVMe Controllers 00:13:05.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:05.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:05.254 Initialization complete. Launching workers. 00:13:05.254 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 885 00:13:05.254 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1163, failed to submit 42 00:13:05.254 success 991, unsuccessful 172, failed 0 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:05.254 rmmod nvme_tcp 00:13:05.254 rmmod nvme_fabrics 00:13:05.254 rmmod nvme_keyring 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:05.254 13:44:47 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 541562 ']' 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 541562 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 541562 ']' 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 541562 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 541562 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 541562' 00:13:05.254 killing process with pid 541562 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 541562 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 541562 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 
00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:05.254 13:44:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.800 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:07.800 00:13:07.800 real 0m31.494s 00:13:07.800 user 0m42.127s 00:13:07.800 sys 0m11.104s 00:13:07.800 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:07.801 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:07.801 ************************************ 00:13:07.801 END TEST nvmf_zcopy 00:13:07.801 ************************************ 00:13:07.801 13:44:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:07.801 13:44:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:07.801 13:44:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.801 13:44:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:07.801 
************************************ 00:13:07.801 START TEST nvmf_nmic 00:13:07.801 ************************************ 00:13:07.801 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:07.801 * Looking for test storage... 00:13:07.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:07.801 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:07.801 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:13:07.801 13:44:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 
-- # local lt=0 gt=0 eq=0 v 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:07.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.801 --rc genhtml_branch_coverage=1 00:13:07.801 --rc genhtml_function_coverage=1 00:13:07.801 --rc genhtml_legend=1 00:13:07.801 --rc geninfo_all_blocks=1 00:13:07.801 --rc geninfo_unexecuted_blocks=1 00:13:07.801 00:13:07.801 ' 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:07.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.801 --rc genhtml_branch_coverage=1 00:13:07.801 --rc genhtml_function_coverage=1 00:13:07.801 --rc genhtml_legend=1 00:13:07.801 --rc geninfo_all_blocks=1 00:13:07.801 --rc geninfo_unexecuted_blocks=1 00:13:07.801 00:13:07.801 ' 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:07.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.801 --rc genhtml_branch_coverage=1 00:13:07.801 --rc genhtml_function_coverage=1 00:13:07.801 --rc genhtml_legend=1 00:13:07.801 --rc geninfo_all_blocks=1 00:13:07.801 --rc geninfo_unexecuted_blocks=1 00:13:07.801 00:13:07.801 ' 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:07.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.801 --rc genhtml_branch_coverage=1 00:13:07.801 --rc genhtml_function_coverage=1 00:13:07.801 --rc genhtml_legend=1 00:13:07.801 --rc geninfo_all_blocks=1 00:13:07.801 --rc geninfo_unexecuted_blocks=1 00:13:07.801 00:13:07.801 ' 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.801 
13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 
00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:07.801 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.802 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.802 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.802 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:07.802 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:07.802 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:07.802 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:07.802 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:07.802 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:07.802 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:07.802 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:07.802 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:07.802 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.802 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:07.802 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:07.802 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:07.802 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.802 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.802 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.802 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:07.802 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:07.802 
13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:13:07.802 13:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:14.367 13:44:55 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:14.367 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:14.367 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:14.367 Found net devices under 0000:86:00.0: cvl_0_0 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:14.367 Found net devices under 0000:86:00.1: cvl_0_1 00:13:14.367 
13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:14.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:14.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:13:14.367 00:13:14.367 --- 10.0.0.2 ping statistics --- 00:13:14.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.367 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:14.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:14.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:13:14.367 00:13:14.367 --- 10.0.0.1 ping statistics --- 00:13:14.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.367 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:14.367 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:13:14.368 13:44:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=548930 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 548930 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 548930 ']' 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.368 [2024-12-05 13:44:56.095001] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:13:14.368 [2024-12-05 13:44:56.095051] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.368 [2024-12-05 13:44:56.171660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:14.368 [2024-12-05 13:44:56.215414] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.368 [2024-12-05 13:44:56.215451] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:14.368 [2024-12-05 13:44:56.215458] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:14.368 [2024-12-05 13:44:56.215464] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:14.368 [2024-12-05 13:44:56.215470] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:14.368 [2024-12-05 13:44:56.217070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.368 [2024-12-05 13:44:56.217186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.368 [2024-12-05 13:44:56.217295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.368 [2024-12-05 13:44:56.217296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.368 [2024-12-05 13:44:56.354955] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:14.368 
13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.368 Malloc0 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.368 [2024-12-05 13:44:56.419508] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:14.368 test case1: single bdev can't be used in multiple subsystems 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.368 [2024-12-05 13:44:56.447397] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:14.368 [2024-12-05 
13:44:56.447416] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:14.368 [2024-12-05 13:44:56.447423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.368 request: 00:13:14.368 { 00:13:14.368 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:14.368 "namespace": { 00:13:14.368 "bdev_name": "Malloc0", 00:13:14.368 "no_auto_visible": false, 00:13:14.368 "hide_metadata": false 00:13:14.368 }, 00:13:14.368 "method": "nvmf_subsystem_add_ns", 00:13:14.368 "req_id": 1 00:13:14.368 } 00:13:14.368 Got JSON-RPC error response 00:13:14.368 response: 00:13:14.368 { 00:13:14.368 "code": -32602, 00:13:14.368 "message": "Invalid parameters" 00:13:14.368 } 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:14.368 Adding namespace failed - expected result. 
00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:14.368 test case2: host connect to nvmf target in multiple paths 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.368 [2024-12-05 13:44:56.459546] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.368 13:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:15.304 13:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:16.239 13:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:16.239 13:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:13:16.239 13:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:16.239 13:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:16.239 13:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:13:18.774 13:45:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:18.774 13:45:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:18.774 13:45:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:18.774 13:45:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:18.774 13:45:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:18.774 13:45:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:13:18.774 13:45:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:18.774 [global] 00:13:18.774 thread=1 00:13:18.774 invalidate=1 00:13:18.774 rw=write 00:13:18.774 time_based=1 00:13:18.774 runtime=1 00:13:18.774 ioengine=libaio 00:13:18.774 direct=1 00:13:18.774 bs=4096 00:13:18.774 iodepth=1 00:13:18.774 norandommap=0 00:13:18.774 numjobs=1 00:13:18.774 00:13:18.774 verify_dump=1 00:13:18.774 verify_backlog=512 00:13:18.774 verify_state_save=0 00:13:18.774 do_verify=1 00:13:18.774 verify=crc32c-intel 00:13:18.774 [job0] 00:13:18.774 filename=/dev/nvme0n1 00:13:18.774 Could not set queue depth (nvme0n1) 00:13:18.774 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:18.774 fio-3.35 00:13:18.774 Starting 1 thread 00:13:19.710 00:13:19.710 job0: (groupid=0, jobs=1): err= 0: pid=550066: Thu Dec 5 13:45:02 2024 00:13:19.710 read: IOPS=153, BW=613KiB/s (628kB/s)(620KiB/1011msec) 00:13:19.710 slat (nsec): min=3784, max=25002, avg=5432.29, stdev=2900.39 00:13:19.710 clat (usec): min=189, max=42801, avg=6053.57, stdev=14396.05 00:13:19.710 lat (usec): min=193, max=42811, 
avg=6059.01, stdev=14398.17 00:13:19.710 clat percentiles (usec): 00:13:19.710 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 210], 00:13:19.710 | 30.00th=[ 212], 40.00th=[ 215], 50.00th=[ 217], 60.00th=[ 221], 00:13:19.710 | 70.00th=[ 223], 80.00th=[ 229], 90.00th=[41157], 95.00th=[41157], 00:13:19.710 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:13:19.710 | 99.99th=[42730] 00:13:19.711 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:13:19.711 slat (nsec): min=4392, max=43077, avg=5602.39, stdev=1748.64 00:13:19.711 clat (usec): min=110, max=361, avg=131.66, stdev=17.54 00:13:19.711 lat (usec): min=116, max=404, avg=137.27, stdev=18.60 00:13:19.711 clat percentiles (usec): 00:13:19.711 | 1.00th=[ 117], 5.00th=[ 121], 10.00th=[ 122], 20.00th=[ 124], 00:13:19.711 | 30.00th=[ 125], 40.00th=[ 127], 50.00th=[ 128], 60.00th=[ 129], 00:13:19.711 | 70.00th=[ 131], 80.00th=[ 133], 90.00th=[ 151], 95.00th=[ 165], 00:13:19.711 | 99.00th=[ 180], 99.50th=[ 219], 99.90th=[ 363], 99.95th=[ 363], 00:13:19.711 | 99.99th=[ 363] 00:13:19.711 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:13:19.711 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:19.711 lat (usec) : 250=95.80%, 500=0.90% 00:13:19.711 lat (msec) : 50=3.30% 00:13:19.711 cpu : usr=0.10%, sys=0.40%, ctx=670, majf=0, minf=1 00:13:19.711 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:19.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:19.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:19.711 issued rwts: total=155,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:19.711 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:19.711 00:13:19.711 Run status group 0 (all jobs): 00:13:19.711 READ: bw=613KiB/s (628kB/s), 613KiB/s-613KiB/s (628kB/s-628kB/s), io=620KiB (635kB), run=1011-1011msec 
00:13:19.711 WRITE: bw=2026KiB/s (2074kB/s), 2026KiB/s-2026KiB/s (2074kB/s-2074kB/s), io=2048KiB (2097kB), run=1011-1011msec 00:13:19.711 00:13:19.711 Disk stats (read/write): 00:13:19.711 nvme0n1: ios=207/512, merge=0/0, ticks=1500/66, in_queue=1566, util=99.40% 00:13:19.711 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:19.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:19.970 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:19.970 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:13:19.971 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:19.971 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.971 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:19.971 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.971 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:13:19.971 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:19.971 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:19.971 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:19.971 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:13:19.971 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:19.971 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:13:19.971 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 
00:13:19.971 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:19.971 rmmod nvme_tcp 00:13:19.971 rmmod nvme_fabrics 00:13:19.971 rmmod nvme_keyring 00:13:19.971 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:19.971 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:13:19.971 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:13:19.971 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 548930 ']' 00:13:19.971 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 548930 00:13:19.971 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 548930 ']' 00:13:19.971 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 548930 00:13:19.971 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:13:19.971 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:19.971 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 548930 00:13:20.230 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:20.231 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:20.231 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 548930' 00:13:20.231 killing process with pid 548930 00:13:20.231 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 548930 00:13:20.231 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 548930 00:13:20.231 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- 
# '[' '' == iso ']' 00:13:20.231 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:20.231 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:20.231 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:13:20.231 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:13:20.231 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:20.231 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:13:20.231 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:20.231 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:20.231 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.231 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:20.231 13:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.767 13:45:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:22.767 00:13:22.767 real 0m14.984s 00:13:22.767 user 0m32.964s 00:13:22.767 sys 0m5.191s 00:13:22.767 13:45:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.767 13:45:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:22.767 ************************************ 00:13:22.767 END TEST nvmf_nmic 00:13:22.767 ************************************ 00:13:22.767 13:45:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:22.767 13:45:04 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:22.767 13:45:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.767 13:45:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:22.767 ************************************ 00:13:22.767 START TEST nvmf_fio_target 00:13:22.767 ************************************ 00:13:22.767 13:45:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:22.767 * Looking for test storage... 00:13:22.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:22.767 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:22.767 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:13:22.767 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:22.767 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:22.767 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:22.767 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:22.767 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:22.767 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:22.767 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:22.767 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:13:22.767 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:22.767 
13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:22.767 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:22.767 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:22.767 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:22.767 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:13:22.767 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:13:22.767 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:22.767 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:22.767 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:13:22.767 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:22.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.768 --rc genhtml_branch_coverage=1 00:13:22.768 --rc genhtml_function_coverage=1 00:13:22.768 --rc genhtml_legend=1 00:13:22.768 --rc geninfo_all_blocks=1 00:13:22.768 --rc geninfo_unexecuted_blocks=1 00:13:22.768 00:13:22.768 ' 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:22.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.768 --rc genhtml_branch_coverage=1 00:13:22.768 --rc genhtml_function_coverage=1 00:13:22.768 --rc genhtml_legend=1 00:13:22.768 --rc geninfo_all_blocks=1 00:13:22.768 --rc geninfo_unexecuted_blocks=1 00:13:22.768 00:13:22.768 ' 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:22.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.768 --rc genhtml_branch_coverage=1 00:13:22.768 --rc genhtml_function_coverage=1 00:13:22.768 --rc genhtml_legend=1 00:13:22.768 --rc geninfo_all_blocks=1 00:13:22.768 --rc geninfo_unexecuted_blocks=1 00:13:22.768 00:13:22.768 ' 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:22.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.768 --rc genhtml_branch_coverage=1 00:13:22.768 --rc 
genhtml_function_coverage=1 00:13:22.768 --rc genhtml_legend=1 00:13:22.768 --rc geninfo_all_blocks=1 00:13:22.768 --rc geninfo_unexecuted_blocks=1 00:13:22.768 00:13:22.768 ' 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:22.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:13:22.768 13:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.339 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:29.339 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:13:29.339 13:45:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:29.339 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:29.339 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:29.339 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:29.339 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:29.339 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:13:29.339 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:29.339 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:29.340 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:29.340 13:45:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:29.340 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:29.340 Found net devices under 0000:86:00.0: cvl_0_0 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:29.340 Found net devices under 0000:86:00.1: cvl_0_1 
00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:29.340 13:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:29.340 13:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:29.340 13:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:29.340 13:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:29.340 13:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:29.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:29.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:13:29.340 00:13:29.340 --- 10.0.0.2 ping statistics --- 00:13:29.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.340 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:13:29.340 13:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:29.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:29.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:13:29.340 00:13:29.340 --- 10.0.0.1 ping statistics --- 00:13:29.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.340 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:13:29.340 13:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:29.340 13:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:13:29.340 13:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:29.340 13:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:29.340 13:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:29.340 13:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:29.340 13:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:29.340 13:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:29.341 13:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:29.341 13:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:29.341 13:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
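The `nvmf_tcp_init` trace above (nvmf/common.sh@250-287) splits the two `cvl_*` interfaces into a target network namespace and an initiator side in the root namespace, then verifies connectivity with the two pings. For readability, the same command sequence can be condensed into one sketch; interface names, addresses, and the namespace name are copied from the log, and the function is deliberately defined but not invoked here, since the real commands need root and the actual NICs:

```shell
# Sketch of the namespace wiring performed by nvmf_tcp_init in the log above.
# Not run here: requires root and the physical cvl_* interfaces.
setup_nvmf_tcp_ns() {
    ip netns add cvl_0_0_ns_spdk                  # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP stays in root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listen port on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
}
```

The target app is then launched inside the namespace via `ip netns exec cvl_0_0_ns_spdk`, which is why the subsequent pings cross between 10.0.0.1 and 10.0.0.2 over the real link rather than loopback.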
00:13:29.341 13:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:29.341 13:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.341 13:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=554286 00:13:29.341 13:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 554286 00:13:29.341 13:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:29.341 13:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 554286 ']' 00:13:29.341 13:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.341 13:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:29.341 13:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.341 13:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:29.341 13:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.341 [2024-12-05 13:45:11.156428] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:13:29.341 [2024-12-05 13:45:11.156479] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.341 [2024-12-05 13:45:11.245560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:29.341 [2024-12-05 13:45:11.287295] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.341 [2024-12-05 13:45:11.287333] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:29.341 [2024-12-05 13:45:11.287340] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:29.341 [2024-12-05 13:45:11.287346] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:29.341 [2024-12-05 13:45:11.287350] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:29.341 [2024-12-05 13:45:11.288793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.341 [2024-12-05 13:45:11.288905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.341 [2024-12-05 13:45:11.289011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.341 [2024-12-05 13:45:11.289013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:29.598 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.598 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:13:29.598 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:29.598 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:29.599 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.599 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.599 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:29.856 [2024-12-05 13:45:12.213863] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:29.856 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:30.114 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:30.114 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:30.114 13:45:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:30.114 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:30.372 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:30.373 13:45:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:30.631 13:45:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:30.631 13:45:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:30.888 13:45:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:31.147 13:45:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:31.147 13:45:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:31.147 13:45:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:31.147 13:45:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:31.406 13:45:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:31.406 13:45:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:13:31.667 13:45:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:31.926 13:45:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:31.926 13:45:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:32.184 13:45:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:32.184 13:45:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:32.184 13:45:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:32.442 [2024-12-05 13:45:14.900072] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.442 13:45:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:32.700 13:45:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:32.957 13:45:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
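The RPC calls interleaved through the trace above (target/fio.sh@19-44) build the subsystem the fio jobs will exercise. Condensed into one sketch, with the full `scripts/rpc.py` path shortened to `rpc.py` and again defined as an uninvoked function because it needs a live SPDK target:

```shell
# Condensed view of the target construction driven by target/fio.sh above.
# All RPCs appear verbatim in the log; not run here (needs a running nvmf_tgt).
build_fio_target() {
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    # Malloc0..Malloc6: two plain bdevs, two for raid0, three for concat
    for i in 0 1 2 3 4 5 6; do rpc.py bdev_malloc_create 64 512; done
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    done
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
}
```

The four namespaces explain why `nvme connect` below surfaces as `/dev/nvme0n1` through `/dev/nvme0n4`, and why `waitforserial` polls until it counts four devices with the SPDKISFASTANDAWESOME serial.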
00:13:33.889 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:33.889 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:13:33.889 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:33.889 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:13:33.889 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:13:33.889 13:45:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:13:36.416 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:36.416 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:36.416 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:36.416 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:13:36.416 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:36.416 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:13:36.416 13:45:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:36.416 [global] 00:13:36.416 thread=1 00:13:36.416 invalidate=1 00:13:36.416 rw=write 00:13:36.416 time_based=1 00:13:36.416 runtime=1 00:13:36.416 ioengine=libaio 00:13:36.416 direct=1 00:13:36.416 bs=4096 00:13:36.416 iodepth=1 00:13:36.416 norandommap=0 00:13:36.416 numjobs=1 00:13:36.416 00:13:36.416 
verify_dump=1 00:13:36.416 verify_backlog=512 00:13:36.416 verify_state_save=0 00:13:36.416 do_verify=1 00:13:36.416 verify=crc32c-intel 00:13:36.416 [job0] 00:13:36.416 filename=/dev/nvme0n1 00:13:36.416 [job1] 00:13:36.416 filename=/dev/nvme0n2 00:13:36.416 [job2] 00:13:36.416 filename=/dev/nvme0n3 00:13:36.416 [job3] 00:13:36.416 filename=/dev/nvme0n4 00:13:36.416 Could not set queue depth (nvme0n1) 00:13:36.416 Could not set queue depth (nvme0n2) 00:13:36.416 Could not set queue depth (nvme0n3) 00:13:36.416 Could not set queue depth (nvme0n4) 00:13:36.416 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:36.416 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:36.416 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:36.416 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:36.416 fio-3.35 00:13:36.416 Starting 4 threads 00:13:37.814 00:13:37.814 job0: (groupid=0, jobs=1): err= 0: pid=555802: Thu Dec 5 13:45:19 2024 00:13:37.814 read: IOPS=2251, BW=9007KiB/s (9223kB/s)(9016KiB/1001msec) 00:13:37.814 slat (nsec): min=8417, max=53388, avg=9648.26, stdev=1662.94 00:13:37.814 clat (usec): min=162, max=1337, avg=224.62, stdev=36.28 00:13:37.814 lat (usec): min=190, max=1346, avg=234.26, stdev=36.31 00:13:37.814 clat percentiles (usec): 00:13:37.814 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 206], 00:13:37.814 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 225], 00:13:37.814 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 247], 95.00th=[ 262], 00:13:37.814 | 99.00th=[ 322], 99.50th=[ 396], 99.90th=[ 586], 99.95th=[ 685], 00:13:37.814 | 99.99th=[ 1336] 00:13:37.814 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:13:37.814 slat (nsec): min=12069, max=40166, avg=13564.32, 
stdev=1868.15 00:13:37.814 clat (usec): min=130, max=926, avg=164.60, stdev=27.70 00:13:37.814 lat (usec): min=143, max=938, avg=178.16, stdev=27.92 00:13:37.814 clat percentiles (usec): 00:13:37.814 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 153], 00:13:37.814 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 165], 00:13:37.814 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 190], 00:13:37.814 | 99.00th=[ 212], 99.50th=[ 223], 99.90th=[ 660], 99.95th=[ 922], 00:13:37.814 | 99.99th=[ 930] 00:13:37.814 bw ( KiB/s): min=11104, max=11104, per=39.11%, avg=11104.00, stdev= 0.00, samples=1 00:13:37.814 iops : min= 2776, max= 2776, avg=2776.00, stdev= 0.00, samples=1 00:13:37.814 lat (usec) : 250=96.18%, 500=3.70%, 750=0.06%, 1000=0.04% 00:13:37.814 lat (msec) : 2=0.02% 00:13:37.814 cpu : usr=4.30%, sys=8.70%, ctx=4815, majf=0, minf=1 00:13:37.814 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:37.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.814 issued rwts: total=2254,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.814 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:37.814 job1: (groupid=0, jobs=1): err= 0: pid=555819: Thu Dec 5 13:45:19 2024 00:13:37.814 read: IOPS=1162, BW=4651KiB/s (4763kB/s)(4656KiB/1001msec) 00:13:37.814 slat (nsec): min=8459, max=24656, avg=9678.70, stdev=1775.45 00:13:37.814 clat (usec): min=188, max=41036, avg=604.95, stdev=3916.73 00:13:37.814 lat (usec): min=197, max=41059, avg=614.63, stdev=3917.88 00:13:37.814 clat percentiles (usec): 00:13:37.814 | 1.00th=[ 196], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 208], 00:13:37.814 | 30.00th=[ 215], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 225], 00:13:37.814 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 245], 95.00th=[ 260], 00:13:37.814 | 99.00th=[ 310], 99.50th=[41157], 99.90th=[41157], 
99.95th=[41157], 00:13:37.814 | 99.99th=[41157] 00:13:37.814 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:37.814 slat (nsec): min=11938, max=55344, avg=13546.56, stdev=2097.53 00:13:37.814 clat (usec): min=135, max=277, avg=166.09, stdev=14.67 00:13:37.814 lat (usec): min=147, max=290, avg=179.64, stdev=15.13 00:13:37.814 clat percentiles (usec): 00:13:37.814 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:13:37.814 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 167], 00:13:37.814 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 192], 00:13:37.814 | 99.00th=[ 206], 99.50th=[ 215], 99.90th=[ 255], 99.95th=[ 277], 00:13:37.814 | 99.99th=[ 277] 00:13:37.815 bw ( KiB/s): min= 4096, max= 4096, per=14.43%, avg=4096.00, stdev= 0.00, samples=1 00:13:37.815 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:37.815 lat (usec) : 250=96.74%, 500=2.85% 00:13:37.815 lat (msec) : 50=0.41% 00:13:37.815 cpu : usr=2.30%, sys=4.90%, ctx=2702, majf=0, minf=1 00:13:37.815 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:37.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.815 issued rwts: total=1164,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.815 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:37.815 job2: (groupid=0, jobs=1): err= 0: pid=555838: Thu Dec 5 13:45:19 2024 00:13:37.815 read: IOPS=21, BW=87.1KiB/s (89.2kB/s)(88.0KiB/1010msec) 00:13:37.815 slat (nsec): min=3474, max=25015, avg=21664.73, stdev=5216.84 00:13:37.815 clat (usec): min=40615, max=41010, avg=40954.17, stdev=80.34 00:13:37.815 lat (usec): min=40618, max=41031, avg=40975.83, stdev=83.89 00:13:37.815 clat percentiles (usec): 00:13:37.815 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:37.815 | 30.00th=[41157], 
40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:37.815 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:37.815 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:37.815 | 99.99th=[41157] 00:13:37.815 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:13:37.815 slat (usec): min=3, max=24946, avg=54.15, stdev=1102.27 00:13:37.815 clat (usec): min=111, max=981, avg=156.02, stdev=57.35 00:13:37.815 lat (usec): min=115, max=25880, avg=210.17, stdev=1137.69 00:13:37.815 clat percentiles (usec): 00:13:37.815 | 1.00th=[ 121], 5.00th=[ 130], 10.00th=[ 135], 20.00th=[ 139], 00:13:37.815 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 153], 00:13:37.815 | 70.00th=[ 157], 80.00th=[ 163], 90.00th=[ 172], 95.00th=[ 180], 00:13:37.815 | 99.00th=[ 281], 99.50th=[ 570], 99.90th=[ 979], 99.95th=[ 979], 00:13:37.815 | 99.99th=[ 979] 00:13:37.815 bw ( KiB/s): min= 4096, max= 4096, per=14.43%, avg=4096.00, stdev= 0.00, samples=1 00:13:37.815 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:37.815 lat (usec) : 250=94.57%, 500=0.75%, 750=0.19%, 1000=0.37% 00:13:37.815 lat (msec) : 50=4.12% 00:13:37.815 cpu : usr=0.10%, sys=0.50%, ctx=536, majf=0, minf=1 00:13:37.815 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:37.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.815 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.815 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:37.815 job3: (groupid=0, jobs=1): err= 0: pid=555844: Thu Dec 5 13:45:19 2024 00:13:37.815 read: IOPS=2314, BW=9259KiB/s (9481kB/s)(9268KiB/1001msec) 00:13:37.815 slat (nsec): min=7336, max=37486, avg=8637.78, stdev=1400.37 00:13:37.815 clat (usec): min=172, max=901, avg=218.52, stdev=27.68 00:13:37.815 lat (usec): 
min=181, max=913, avg=227.16, stdev=27.91 00:13:37.815 clat percentiles (usec): 00:13:37.815 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 200], 00:13:37.815 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 219], 00:13:37.815 | 70.00th=[ 229], 80.00th=[ 241], 90.00th=[ 249], 95.00th=[ 258], 00:13:37.815 | 99.00th=[ 277], 99.50th=[ 293], 99.90th=[ 396], 99.95th=[ 494], 00:13:37.815 | 99.99th=[ 906] 00:13:37.815 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:13:37.815 slat (nsec): min=5130, max=44738, avg=12228.74, stdev=2072.28 00:13:37.815 clat (usec): min=124, max=304, avg=167.08, stdev=19.79 00:13:37.815 lat (usec): min=135, max=320, avg=179.31, stdev=20.14 00:13:37.815 clat percentiles (usec): 00:13:37.815 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:13:37.815 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 167], 00:13:37.815 | 70.00th=[ 174], 80.00th=[ 184], 90.00th=[ 196], 95.00th=[ 202], 00:13:37.815 | 99.00th=[ 223], 99.50th=[ 258], 99.90th=[ 285], 99.95th=[ 302], 00:13:37.815 | 99.99th=[ 306] 00:13:37.815 bw ( KiB/s): min=11800, max=11800, per=41.57%, avg=11800.00, stdev= 0.00, samples=1 00:13:37.815 iops : min= 2950, max= 2950, avg=2950.00, stdev= 0.00, samples=1 00:13:37.815 lat (usec) : 250=95.04%, 500=4.94%, 1000=0.02% 00:13:37.815 cpu : usr=5.50%, sys=6.60%, ctx=4879, majf=0, minf=1 00:13:37.815 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:37.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.815 issued rwts: total=2317,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.815 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:37.815 00:13:37.815 Run status group 0 (all jobs): 00:13:37.815 READ: bw=22.3MiB/s (23.3MB/s), 87.1KiB/s-9259KiB/s (89.2kB/s-9481kB/s), io=22.5MiB (23.6MB), run=1001-1010msec 
00:13:37.815 WRITE: bw=27.7MiB/s (29.1MB/s), 2028KiB/s-9.99MiB/s (2076kB/s-10.5MB/s), io=28.0MiB (29.4MB), run=1001-1010msec 00:13:37.815 00:13:37.815 Disk stats (read/write): 00:13:37.815 nvme0n1: ios=2073/2048, merge=0/0, ticks=707/311, in_queue=1018, util=85.87% 00:13:37.815 nvme0n2: ios=979/1024, merge=0/0, ticks=855/154, in_queue=1009, util=89.84% 00:13:37.815 nvme0n3: ios=67/512, merge=0/0, ticks=961/77, in_queue=1038, util=94.58% 00:13:37.815 nvme0n4: ios=2072/2124, merge=0/0, ticks=1326/342, in_queue=1668, util=94.33% 00:13:37.815 13:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:37.815 [global] 00:13:37.815 thread=1 00:13:37.815 invalidate=1 00:13:37.815 rw=randwrite 00:13:37.815 time_based=1 00:13:37.815 runtime=1 00:13:37.815 ioengine=libaio 00:13:37.815 direct=1 00:13:37.815 bs=4096 00:13:37.815 iodepth=1 00:13:37.815 norandommap=0 00:13:37.815 numjobs=1 00:13:37.815 00:13:37.815 verify_dump=1 00:13:37.815 verify_backlog=512 00:13:37.815 verify_state_save=0 00:13:37.815 do_verify=1 00:13:37.815 verify=crc32c-intel 00:13:37.815 [job0] 00:13:37.815 filename=/dev/nvme0n1 00:13:37.815 [job1] 00:13:37.815 filename=/dev/nvme0n2 00:13:37.815 [job2] 00:13:37.815 filename=/dev/nvme0n3 00:13:37.815 [job3] 00:13:37.815 filename=/dev/nvme0n4 00:13:37.815 Could not set queue depth (nvme0n1) 00:13:37.815 Could not set queue depth (nvme0n2) 00:13:37.815 Could not set queue depth (nvme0n3) 00:13:37.815 Could not set queue depth (nvme0n4) 00:13:37.815 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:37.815 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:37.815 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:37.815 job3: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:37.815 fio-3.35 00:13:37.815 Starting 4 threads 00:13:39.192 00:13:39.192 job0: (groupid=0, jobs=1): err= 0: pid=556239: Thu Dec 5 13:45:21 2024 00:13:39.192 read: IOPS=22, BW=91.5KiB/s (93.6kB/s)(92.0KiB/1006msec) 00:13:39.192 slat (nsec): min=10334, max=28875, avg=22204.70, stdev=4201.30 00:13:39.192 clat (usec): min=231, max=44007, avg=39310.54, stdev=8543.28 00:13:39.192 lat (usec): min=253, max=44036, avg=39332.74, stdev=8543.44 00:13:39.192 clat percentiles (usec): 00:13:39.192 | 1.00th=[ 231], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:13:39.192 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:39.192 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:39.192 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:13:39.192 | 99.99th=[43779] 00:13:39.192 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:13:39.192 slat (nsec): min=10237, max=41381, avg=11793.95, stdev=2485.60 00:13:39.192 clat (usec): min=136, max=400, avg=181.59, stdev=22.29 00:13:39.192 lat (usec): min=147, max=441, avg=193.39, stdev=23.02 00:13:39.192 clat percentiles (usec): 00:13:39.192 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 165], 00:13:39.192 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 186], 00:13:39.192 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 204], 95.00th=[ 210], 00:13:39.192 | 99.00th=[ 239], 99.50th=[ 255], 99.90th=[ 400], 99.95th=[ 400], 00:13:39.192 | 99.99th=[ 400] 00:13:39.192 bw ( KiB/s): min= 4096, max= 4096, per=23.70%, avg=4096.00, stdev= 0.00, samples=1 00:13:39.192 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:39.192 lat (usec) : 250=95.14%, 500=0.75% 00:13:39.192 lat (msec) : 50=4.11% 00:13:39.192 cpu : usr=0.60%, sys=0.80%, ctx=538, majf=0, minf=1 00:13:39.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:13:39.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.192 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:39.192 job1: (groupid=0, jobs=1): err= 0: pid=556240: Thu Dec 5 13:45:21 2024 00:13:39.192 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:13:39.192 slat (nsec): min=7114, max=40566, avg=8239.48, stdev=1580.94 00:13:39.192 clat (usec): min=156, max=545, avg=206.72, stdev=37.32 00:13:39.192 lat (usec): min=164, max=552, avg=214.96, stdev=37.40 00:13:39.192 clat percentiles (usec): 00:13:39.192 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 178], 00:13:39.192 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 200], 00:13:39.192 | 70.00th=[ 229], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 262], 00:13:39.192 | 99.00th=[ 326], 99.50th=[ 359], 99.90th=[ 482], 99.95th=[ 486], 00:13:39.192 | 99.99th=[ 545] 00:13:39.192 write: IOPS=2838, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1001msec); 0 zone resets 00:13:39.192 slat (nsec): min=10516, max=52885, avg=11813.82, stdev=1947.22 00:13:39.192 clat (usec): min=102, max=319, avg=140.96, stdev=16.57 00:13:39.192 lat (usec): min=121, max=356, avg=152.78, stdev=16.83 00:13:39.192 clat percentiles (usec): 00:13:39.192 | 1.00th=[ 118], 5.00th=[ 123], 10.00th=[ 125], 20.00th=[ 128], 00:13:39.193 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 143], 00:13:39.193 | 70.00th=[ 147], 80.00th=[ 153], 90.00th=[ 163], 95.00th=[ 174], 00:13:39.193 | 99.00th=[ 192], 99.50th=[ 202], 99.90th=[ 229], 99.95th=[ 251], 00:13:39.193 | 99.99th=[ 322] 00:13:39.193 bw ( KiB/s): min=12288, max=12288, per=71.10%, avg=12288.00, stdev= 0.00, samples=1 00:13:39.193 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:39.193 lat (usec) : 250=93.08%, 500=6.91%, 
750=0.02% 00:13:39.193 cpu : usr=3.70%, sys=9.30%, ctx=5402, majf=0, minf=1 00:13:39.193 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:39.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.193 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.193 issued rwts: total=2560,2841,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.193 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:39.193 job2: (groupid=0, jobs=1): err= 0: pid=556241: Thu Dec 5 13:45:21 2024 00:13:39.193 read: IOPS=474, BW=1897KiB/s (1942kB/s)(1908KiB/1006msec) 00:13:39.193 slat (nsec): min=6889, max=30113, avg=8369.35, stdev=3362.49 00:13:39.193 clat (usec): min=192, max=42009, avg=1882.32, stdev=8024.30 00:13:39.193 lat (usec): min=199, max=42032, avg=1890.69, stdev=8027.27 00:13:39.193 clat percentiles (usec): 00:13:39.193 | 1.00th=[ 206], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 235], 00:13:39.193 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:13:39.193 | 70.00th=[ 258], 80.00th=[ 262], 90.00th=[ 273], 95.00th=[ 375], 00:13:39.193 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:39.193 | 99.99th=[42206] 00:13:39.193 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:13:39.193 slat (nsec): min=9465, max=38163, avg=10768.27, stdev=2014.15 00:13:39.193 clat (usec): min=121, max=350, avg=186.57, stdev=22.19 00:13:39.193 lat (usec): min=133, max=388, avg=197.34, stdev=22.93 00:13:39.193 clat percentiles (usec): 00:13:39.193 | 1.00th=[ 139], 5.00th=[ 151], 10.00th=[ 163], 20.00th=[ 172], 00:13:39.193 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 190], 00:13:39.193 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 208], 95.00th=[ 219], 00:13:39.193 | 99.00th=[ 253], 99.50th=[ 269], 99.90th=[ 351], 99.95th=[ 351], 00:13:39.193 | 99.99th=[ 351] 00:13:39.193 bw ( KiB/s): min= 4096, max= 4096, per=23.70%, avg=4096.00, 
stdev= 0.00, samples=1 00:13:39.193 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:39.193 lat (usec) : 250=77.05%, 500=21.03% 00:13:39.193 lat (msec) : 50=1.92% 00:13:39.193 cpu : usr=0.50%, sys=1.00%, ctx=991, majf=0, minf=1 00:13:39.193 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:39.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.193 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.193 issued rwts: total=477,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.193 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:39.193 job3: (groupid=0, jobs=1): err= 0: pid=556242: Thu Dec 5 13:45:21 2024 00:13:39.193 read: IOPS=47, BW=190KiB/s (194kB/s)(192KiB/1013msec) 00:13:39.193 slat (nsec): min=7410, max=23127, avg=10611.60, stdev=4048.51 00:13:39.193 clat (usec): min=209, max=42298, avg=19117.51, stdev=20753.39 00:13:39.193 lat (usec): min=217, max=42306, avg=19128.12, stdev=20752.33 00:13:39.193 clat percentiles (usec): 00:13:39.193 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 219], 20.00th=[ 225], 00:13:39.193 | 30.00th=[ 229], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[40633], 00:13:39.193 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:13:39.193 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:39.193 | 99.99th=[42206] 00:13:39.193 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:13:39.193 slat (nsec): min=9744, max=35988, avg=11277.26, stdev=1868.66 00:13:39.193 clat (usec): min=142, max=372, avg=170.15, stdev=17.83 00:13:39.193 lat (usec): min=153, max=405, avg=181.42, stdev=18.95 00:13:39.193 clat percentiles (usec): 00:13:39.193 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:13:39.193 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 172], 00:13:39.193 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 188], 95.00th=[ 192], 
00:13:39.193 | 99.00th=[ 208], 99.50th=[ 241], 99.90th=[ 371], 99.95th=[ 371], 00:13:39.193 | 99.99th=[ 371] 00:13:39.193 bw ( KiB/s): min= 4096, max= 4096, per=23.70%, avg=4096.00, stdev= 0.00, samples=1 00:13:39.193 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:39.193 lat (usec) : 250=95.54%, 500=0.54% 00:13:39.193 lat (msec) : 50=3.93% 00:13:39.193 cpu : usr=0.49%, sys=0.30%, ctx=561, majf=0, minf=1 00:13:39.193 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:39.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.193 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.193 issued rwts: total=48,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.193 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:39.193 00:13:39.193 Run status group 0 (all jobs): 00:13:39.193 READ: bw=12.0MiB/s (12.6MB/s), 91.5KiB/s-9.99MiB/s (93.6kB/s-10.5MB/s), io=12.1MiB (12.7MB), run=1001-1013msec 00:13:39.193 WRITE: bw=16.9MiB/s (17.7MB/s), 2022KiB/s-11.1MiB/s (2070kB/s-11.6MB/s), io=17.1MiB (17.9MB), run=1001-1013msec 00:13:39.193 00:13:39.193 Disk stats (read/write): 00:13:39.193 nvme0n1: ios=41/512, merge=0/0, ticks=1609/84, in_queue=1693, util=89.68% 00:13:39.193 nvme0n2: ios=2099/2253, merge=0/0, ticks=1061/289, in_queue=1350, util=93.74% 00:13:39.193 nvme0n3: ios=512/512, merge=0/0, ticks=1530/92, in_queue=1622, util=99.46% 00:13:39.193 nvme0n4: ios=89/512, merge=0/0, ticks=847/87, in_queue=934, util=98.02% 00:13:39.193 13:45:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:39.193 [global] 00:13:39.193 thread=1 00:13:39.193 invalidate=1 00:13:39.193 rw=write 00:13:39.193 time_based=1 00:13:39.193 runtime=1 00:13:39.193 ioengine=libaio 00:13:39.193 direct=1 00:13:39.193 bs=4096 00:13:39.193 iodepth=128 00:13:39.193 
norandommap=0 00:13:39.193 numjobs=1 00:13:39.193 00:13:39.193 verify_dump=1 00:13:39.193 verify_backlog=512 00:13:39.193 verify_state_save=0 00:13:39.193 do_verify=1 00:13:39.193 verify=crc32c-intel 00:13:39.193 [job0] 00:13:39.193 filename=/dev/nvme0n1 00:13:39.193 [job1] 00:13:39.193 filename=/dev/nvme0n2 00:13:39.193 [job2] 00:13:39.193 filename=/dev/nvme0n3 00:13:39.193 [job3] 00:13:39.193 filename=/dev/nvme0n4 00:13:39.193 Could not set queue depth (nvme0n1) 00:13:39.193 Could not set queue depth (nvme0n2) 00:13:39.193 Could not set queue depth (nvme0n3) 00:13:39.194 Could not set queue depth (nvme0n4) 00:13:39.471 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:39.471 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:39.471 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:39.471 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:39.471 fio-3.35 00:13:39.471 Starting 4 threads 00:13:40.986 00:13:40.986 job0: (groupid=0, jobs=1): err= 0: pid=556609: Thu Dec 5 13:45:23 2024 00:13:40.986 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:13:40.986 slat (nsec): min=1088, max=14364k, avg=125356.66, stdev=813995.46 00:13:40.986 clat (usec): min=5056, max=52014, avg=15540.52, stdev=9746.14 00:13:40.986 lat (usec): min=5063, max=52041, avg=15665.88, stdev=9827.54 00:13:40.986 clat percentiles (usec): 00:13:40.986 | 1.00th=[ 6128], 5.00th=[ 7504], 10.00th=[ 8586], 20.00th=[10028], 00:13:40.986 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[11731], 00:13:40.986 | 70.00th=[14615], 80.00th=[21103], 90.00th=[32900], 95.00th=[40633], 00:13:40.986 | 99.00th=[43779], 99.50th=[43779], 99.90th=[48497], 99.95th=[50070], 00:13:40.986 | 99.99th=[52167] 00:13:40.986 write: IOPS=3760, BW=14.7MiB/s 
(15.4MB/s)(14.8MiB/1006msec); 0 zone resets 00:13:40.986 slat (nsec): min=1813, max=27300k, avg=140268.22, stdev=823449.32 00:13:40.986 clat (usec): min=5233, max=58090, avg=17667.86, stdev=9803.80 00:13:40.986 lat (usec): min=6019, max=58101, avg=17808.12, stdev=9870.70 00:13:40.986 clat percentiles (usec): 00:13:40.986 | 1.00th=[ 7439], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10421], 00:13:40.986 | 30.00th=[10552], 40.00th=[10683], 50.00th=[13566], 60.00th=[20317], 00:13:40.986 | 70.00th=[21890], 80.00th=[22414], 90.00th=[27132], 95.00th=[41681], 00:13:40.986 | 99.00th=[55313], 99.50th=[56361], 99.90th=[57934], 99.95th=[57934], 00:13:40.986 | 99.99th=[57934] 00:13:40.986 bw ( KiB/s): min=14224, max=15024, per=20.29%, avg=14624.00, stdev=565.69, samples=2 00:13:40.986 iops : min= 3556, max= 3756, avg=3656.00, stdev=141.42, samples=2 00:13:40.986 lat (msec) : 10=15.87%, 20=52.31%, 50=30.72%, 100=1.10% 00:13:40.986 cpu : usr=1.79%, sys=3.48%, ctx=413, majf=0, minf=1 00:13:40.986 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:13:40.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.986 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:40.986 issued rwts: total=3584,3783,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.986 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:40.986 job1: (groupid=0, jobs=1): err= 0: pid=556610: Thu Dec 5 13:45:23 2024 00:13:40.986 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:13:40.986 slat (nsec): min=1423, max=8938.1k, avg=96317.14, stdev=553365.83 00:13:40.986 clat (usec): min=5972, max=27659, avg=12513.91, stdev=3472.67 00:13:40.986 lat (usec): min=5984, max=29376, avg=12610.23, stdev=3512.02 00:13:40.986 clat percentiles (usec): 00:13:40.986 | 1.00th=[ 7242], 5.00th=[ 8029], 10.00th=[ 9241], 20.00th=[ 9765], 00:13:40.986 | 30.00th=[10159], 40.00th=[10552], 50.00th=[11338], 60.00th=[12518], 00:13:40.986 | 
70.00th=[13566], 80.00th=[15664], 90.00th=[17957], 95.00th=[19006], 00:13:40.986 | 99.00th=[21627], 99.50th=[22414], 99.90th=[26870], 99.95th=[26870], 00:13:40.987 | 99.99th=[27657] 00:13:40.987 write: IOPS=4640, BW=18.1MiB/s (19.0MB/s)(18.3MiB/1007msec); 0 zone resets 00:13:40.987 slat (nsec): min=1981, max=7653.0k, avg=112362.43, stdev=590701.21 00:13:40.987 clat (usec): min=298, max=40752, avg=15006.80, stdev=8340.10 00:13:40.987 lat (usec): min=317, max=40760, avg=15119.16, stdev=8408.22 00:13:40.987 clat percentiles (usec): 00:13:40.987 | 1.00th=[ 2343], 5.00th=[ 7373], 10.00th=[ 9241], 20.00th=[ 9765], 00:13:40.987 | 30.00th=[10159], 40.00th=[10290], 50.00th=[11469], 60.00th=[13173], 00:13:40.987 | 70.00th=[15926], 80.00th=[17171], 90.00th=[30016], 95.00th=[33817], 00:13:40.987 | 99.00th=[40109], 99.50th=[40109], 99.90th=[40633], 99.95th=[40633], 00:13:40.987 | 99.99th=[40633] 00:13:40.987 bw ( KiB/s): min=16904, max=19960, per=25.57%, avg=18432.00, stdev=2160.92, samples=2 00:13:40.987 iops : min= 4226, max= 4990, avg=4608.00, stdev=540.23, samples=2 00:13:40.987 lat (usec) : 500=0.02%, 1000=0.03% 00:13:40.987 lat (msec) : 2=0.37%, 4=0.27%, 10=25.59%, 20=62.84%, 50=10.88% 00:13:40.987 cpu : usr=4.47%, sys=5.37%, ctx=518, majf=0, minf=1 00:13:40.987 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:13:40.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.987 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:40.987 issued rwts: total=4608,4673,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.987 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:40.987 job2: (groupid=0, jobs=1): err= 0: pid=556612: Thu Dec 5 13:45:23 2024 00:13:40.987 read: IOPS=4026, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1007msec) 00:13:40.987 slat (nsec): min=1213, max=17796k, avg=116782.64, stdev=883755.71 00:13:40.987 clat (usec): min=3137, max=45742, avg=15205.20, stdev=5462.06 00:13:40.987 lat 
(usec): min=3647, max=45745, avg=15321.98, stdev=5533.61 00:13:40.987 clat percentiles (usec): 00:13:40.987 | 1.00th=[ 5276], 5.00th=[ 8160], 10.00th=[ 9896], 20.00th=[11731], 00:13:40.987 | 30.00th=[12125], 40.00th=[12518], 50.00th=[14484], 60.00th=[16450], 00:13:40.987 | 70.00th=[17171], 80.00th=[17695], 90.00th=[22152], 95.00th=[25822], 00:13:40.987 | 99.00th=[35914], 99.50th=[37487], 99.90th=[39584], 99.95th=[39584], 00:13:40.987 | 99.99th=[45876] 00:13:40.987 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:13:40.987 slat (nsec): min=1867, max=11052k, avg=110254.40, stdev=658015.80 00:13:40.987 clat (usec): min=862, max=42347, avg=16140.84, stdev=6433.45 00:13:40.987 lat (usec): min=875, max=42353, avg=16251.09, stdev=6493.26 00:13:40.987 clat percentiles (usec): 00:13:40.987 | 1.00th=[ 5145], 5.00th=[ 7046], 10.00th=[ 8455], 20.00th=[11076], 00:13:40.987 | 30.00th=[11731], 40.00th=[12911], 50.00th=[15008], 60.00th=[16319], 00:13:40.987 | 70.00th=[20841], 80.00th=[22414], 90.00th=[25035], 95.00th=[25822], 00:13:40.987 | 99.00th=[33424], 99.50th=[34866], 99.90th=[41157], 99.95th=[42206], 00:13:40.987 | 99.99th=[42206] 00:13:40.987 bw ( KiB/s): min=13264, max=19504, per=22.73%, avg=16384.00, stdev=4412.35, samples=2 00:13:40.987 iops : min= 3316, max= 4876, avg=4096.00, stdev=1103.09, samples=2 00:13:40.987 lat (usec) : 1000=0.04% 00:13:40.987 lat (msec) : 2=0.02%, 4=0.26%, 10=11.25%, 20=67.11%, 50=21.32% 00:13:40.987 cpu : usr=3.28%, sys=4.37%, ctx=346, majf=0, minf=1 00:13:40.987 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:40.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.987 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:40.987 issued rwts: total=4055,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.987 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:40.987 job3: (groupid=0, jobs=1): err= 0: pid=556613: Thu 
Dec 5 13:45:23 2024 00:13:40.987 read: IOPS=5199, BW=20.3MiB/s (21.3MB/s)(20.5MiB/1009msec) 00:13:40.987 slat (nsec): min=1273, max=20889k, avg=107624.95, stdev=825016.25 00:13:40.987 clat (usec): min=755, max=54233, avg=13202.73, stdev=5062.24 00:13:40.987 lat (usec): min=4213, max=54262, avg=13310.35, stdev=5124.10 00:13:40.987 clat percentiles (usec): 00:13:40.987 | 1.00th=[ 4817], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[10552], 00:13:40.987 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11731], 00:13:40.987 | 70.00th=[13042], 80.00th=[16057], 90.00th=[19006], 95.00th=[24773], 00:13:40.987 | 99.00th=[30540], 99.50th=[30802], 99.90th=[36963], 99.95th=[36963], 00:13:40.987 | 99.99th=[54264] 00:13:40.987 write: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1009msec); 0 zone resets 00:13:40.987 slat (usec): min=2, max=9160, avg=72.52, stdev=360.35 00:13:40.987 clat (usec): min=2495, max=31202, avg=10405.19, stdev=2218.56 00:13:40.987 lat (usec): min=2509, max=31206, avg=10477.71, stdev=2248.63 00:13:40.987 clat percentiles (usec): 00:13:40.987 | 1.00th=[ 3490], 5.00th=[ 5407], 10.00th=[ 7177], 20.00th=[ 9503], 00:13:40.987 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11207], 60.00th=[11338], 00:13:40.987 | 70.00th=[11469], 80.00th=[11469], 90.00th=[11731], 95.00th=[11863], 00:13:40.987 | 99.00th=[17433], 99.50th=[17433], 99.90th=[21103], 99.95th=[21103], 00:13:40.987 | 99.99th=[31327] 00:13:40.987 bw ( KiB/s): min=20600, max=24440, per=31.24%, avg=22520.00, stdev=2715.29, samples=2 00:13:40.987 iops : min= 5150, max= 6110, avg=5630.00, stdev=678.82, samples=2 00:13:40.987 lat (usec) : 1000=0.01% 00:13:40.987 lat (msec) : 4=0.88%, 10=17.82%, 20=77.07%, 50=4.20%, 100=0.01% 00:13:40.987 cpu : usr=3.97%, sys=6.35%, ctx=664, majf=0, minf=1 00:13:40.987 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:13:40.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.987 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:40.987 issued rwts: total=5246,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.987 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:40.987 00:13:40.987 Run status group 0 (all jobs): 00:13:40.987 READ: bw=67.7MiB/s (71.0MB/s), 13.9MiB/s-20.3MiB/s (14.6MB/s-21.3MB/s), io=68.3MiB (71.7MB), run=1006-1009msec 00:13:40.987 WRITE: bw=70.4MiB/s (73.8MB/s), 14.7MiB/s-21.8MiB/s (15.4MB/s-22.9MB/s), io=71.0MiB (74.5MB), run=1006-1009msec 00:13:40.987 00:13:40.987 Disk stats (read/write): 00:13:40.987 nvme0n1: ios=2925/3072, merge=0/0, ticks=24017/21378, in_queue=45395, util=87.88% 00:13:40.987 nvme0n2: ios=3634/3935, merge=0/0, ticks=22357/28071, in_queue=50428, util=86.45% 00:13:40.987 nvme0n3: ios=3072/3478, merge=0/0, ticks=35860/45279, in_queue=81139, util=87.80% 00:13:40.987 nvme0n4: ios=4152/4543, merge=0/0, ticks=52946/46298, in_queue=99244, util=90.56% 00:13:40.987 13:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:40.987 [global] 00:13:40.987 thread=1 00:13:40.987 invalidate=1 00:13:40.987 rw=randwrite 00:13:40.987 time_based=1 00:13:40.987 runtime=1 00:13:40.987 ioengine=libaio 00:13:40.987 direct=1 00:13:40.987 bs=4096 00:13:40.987 iodepth=128 00:13:40.987 norandommap=0 00:13:40.987 numjobs=1 00:13:40.987 00:13:40.987 verify_dump=1 00:13:40.987 verify_backlog=512 00:13:40.987 verify_state_save=0 00:13:40.987 do_verify=1 00:13:40.987 verify=crc32c-intel 00:13:40.987 [job0] 00:13:40.987 filename=/dev/nvme0n1 00:13:40.987 [job1] 00:13:40.987 filename=/dev/nvme0n2 00:13:40.987 [job2] 00:13:40.987 filename=/dev/nvme0n3 00:13:40.987 [job3] 00:13:40.987 filename=/dev/nvme0n4 00:13:40.987 Could not set queue depth (nvme0n1) 00:13:40.987 Could not set queue depth (nvme0n2) 00:13:40.988 Could not set queue depth (nvme0n3) 00:13:40.988 Could not set queue depth (nvme0n4) 
00:13:41.247 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:41.247 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:41.247 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:41.247 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:41.247 fio-3.35 00:13:41.247 Starting 4 threads 00:13:42.620 00:13:42.620 job0: (groupid=0, jobs=1): err= 0: pid=556997: Thu Dec 5 13:45:24 2024 00:13:42.620 read: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec) 00:13:42.620 slat (nsec): min=1307, max=15109k, avg=102441.74, stdev=763985.12 00:13:42.620 clat (usec): min=978, max=30929, avg=12594.67, stdev=3673.04 00:13:42.620 lat (usec): min=1024, max=37449, avg=12697.11, stdev=3747.40 00:13:42.620 clat percentiles (usec): 00:13:42.620 | 1.00th=[ 5669], 5.00th=[ 8586], 10.00th=[ 9634], 20.00th=[ 9896], 00:13:42.620 | 30.00th=[10290], 40.00th=[11076], 50.00th=[11731], 60.00th=[12125], 00:13:42.620 | 70.00th=[13173], 80.00th=[15270], 90.00th=[17171], 95.00th=[19792], 00:13:42.620 | 99.00th=[26346], 99.50th=[28705], 99.90th=[30016], 99.95th=[30016], 00:13:42.620 | 99.99th=[30802] 00:13:42.620 write: IOPS=4826, BW=18.9MiB/s (19.8MB/s)(19.0MiB/1009msec); 0 zone resets 00:13:42.620 slat (usec): min=2, max=13512, avg=100.97, stdev=592.09 00:13:42.620 clat (usec): min=571, max=54333, avg=14376.40, stdev=7700.48 00:13:42.620 lat (usec): min=582, max=54344, avg=14477.37, stdev=7751.31 00:13:42.620 clat percentiles (usec): 00:13:42.620 | 1.00th=[ 1876], 5.00th=[ 6456], 10.00th=[ 8225], 20.00th=[ 9765], 00:13:42.620 | 30.00th=[10159], 40.00th=[10552], 50.00th=[11338], 60.00th=[11994], 00:13:42.620 | 70.00th=[16909], 80.00th=[21103], 90.00th=[21627], 95.00th=[27919], 00:13:42.620 | 99.00th=[43779], 99.50th=[49546], 99.90th=[54264], 
99.95th=[54264], 00:13:42.620 | 99.99th=[54264] 00:13:42.620 bw ( KiB/s): min=16384, max=21560, per=26.56%, avg=18972.00, stdev=3659.98, samples=2 00:13:42.620 iops : min= 4096, max= 5390, avg=4743.00, stdev=915.00, samples=2 00:13:42.620 lat (usec) : 750=0.06%, 1000=0.14% 00:13:42.620 lat (msec) : 2=0.53%, 4=0.54%, 10=22.48%, 20=61.61%, 50=14.40% 00:13:42.620 lat (msec) : 100=0.24% 00:13:42.620 cpu : usr=3.27%, sys=5.46%, ctx=507, majf=0, minf=1 00:13:42.620 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:13:42.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:42.620 issued rwts: total=4608,4870,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:42.620 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:42.620 job1: (groupid=0, jobs=1): err= 0: pid=556998: Thu Dec 5 13:45:24 2024 00:13:42.620 read: IOPS=4521, BW=17.7MiB/s (18.5MB/s)(18.5MiB/1047msec) 00:13:42.620 slat (nsec): min=1215, max=16970k, avg=117803.93, stdev=817978.53 00:13:42.620 clat (usec): min=3894, max=65463, avg=14536.31, stdev=9066.63 00:13:42.620 lat (usec): min=3904, max=65471, avg=14654.11, stdev=9121.68 00:13:42.620 clat percentiles (usec): 00:13:42.620 | 1.00th=[ 4817], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9896], 00:13:42.620 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11600], 60.00th=[12256], 00:13:42.620 | 70.00th=[15008], 80.00th=[17171], 90.00th=[21627], 95.00th=[26608], 00:13:42.620 | 99.00th=[58983], 99.50th=[61080], 99.90th=[65274], 99.95th=[65274], 00:13:42.620 | 99.99th=[65274] 00:13:42.620 write: IOPS=4890, BW=19.1MiB/s (20.0MB/s)(20.0MiB/1047msec); 0 zone resets 00:13:42.620 slat (nsec): min=1998, max=11404k, avg=82400.98, stdev=333398.58 00:13:42.620 clat (usec): min=1387, max=78867, avg=12510.77, stdev=9654.02 00:13:42.620 lat (usec): min=1428, max=78871, avg=12593.17, stdev=9694.38 00:13:42.620 clat percentiles (usec): 
00:13:42.620 | 1.00th=[ 3359], 5.00th=[ 5014], 10.00th=[ 6783], 20.00th=[ 9503], 00:13:42.620 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10945], 60.00th=[11207], 00:13:42.620 | 70.00th=[11731], 80.00th=[12125], 90.00th=[17171], 95.00th=[21890], 00:13:42.620 | 99.00th=[74974], 99.50th=[77071], 99.90th=[79168], 99.95th=[79168], 00:13:42.620 | 99.99th=[79168] 00:13:42.620 bw ( KiB/s): min=18368, max=22576, per=28.66%, avg=20472.00, stdev=2975.51, samples=2 00:13:42.620 iops : min= 4592, max= 5644, avg=5118.00, stdev=743.88, samples=2 00:13:42.620 lat (msec) : 2=0.19%, 4=1.14%, 10=20.51%, 20=67.35%, 50=8.24% 00:13:42.620 lat (msec) : 100=2.57% 00:13:42.620 cpu : usr=3.35%, sys=4.40%, ctx=708, majf=0, minf=1 00:13:42.620 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:42.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:42.620 issued rwts: total=4734,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:42.620 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:42.620 job2: (groupid=0, jobs=1): err= 0: pid=556999: Thu Dec 5 13:45:24 2024 00:13:42.620 read: IOPS=4774, BW=18.7MiB/s (19.6MB/s)(18.7MiB/1003msec) 00:13:42.620 slat (nsec): min=1373, max=10371k, avg=112060.78, stdev=740795.09 00:13:42.620 clat (usec): min=1468, max=51327, avg=14242.81, stdev=6887.31 00:13:42.620 lat (usec): min=4602, max=51362, avg=14354.87, stdev=6945.71 00:13:42.620 clat percentiles (usec): 00:13:42.620 | 1.00th=[ 6128], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[11076], 00:13:42.620 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12125], 60.00th=[12780], 00:13:42.620 | 70.00th=[13698], 80.00th=[15533], 90.00th=[18744], 95.00th=[33817], 00:13:42.620 | 99.00th=[43779], 99.50th=[44827], 99.90th=[45876], 99.95th=[46924], 00:13:42.620 | 99.99th=[51119] 00:13:42.620 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 
00:13:42.620 slat (usec): min=2, max=12358, avg=82.45, stdev=478.02 00:13:42.620 clat (usec): min=718, max=36673, avg=11484.90, stdev=3463.11 00:13:42.620 lat (usec): min=784, max=36684, avg=11567.35, stdev=3504.33 00:13:42.620 clat percentiles (usec): 00:13:42.620 | 1.00th=[ 2704], 5.00th=[ 5735], 10.00th=[ 7570], 20.00th=[10159], 00:13:42.620 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11600], 60.00th=[11731], 00:13:42.620 | 70.00th=[11863], 80.00th=[13173], 90.00th=[13829], 95.00th=[15401], 00:13:42.620 | 99.00th=[23725], 99.50th=[30278], 99.90th=[35390], 99.95th=[36439], 00:13:42.620 | 99.99th=[36439] 00:13:42.620 bw ( KiB/s): min=17104, max=23856, per=28.68%, avg=20480.00, stdev=4774.38, samples=2 00:13:42.620 iops : min= 4276, max= 5964, avg=5120.00, stdev=1193.60, samples=2 00:13:42.620 lat (usec) : 750=0.01%, 1000=0.08% 00:13:42.620 lat (msec) : 2=0.20%, 4=0.74%, 10=13.20%, 20=80.68%, 50=5.08% 00:13:42.620 lat (msec) : 100=0.01% 00:13:42.620 cpu : usr=3.79%, sys=6.09%, ctx=575, majf=0, minf=1 00:13:42.620 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:42.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:42.620 issued rwts: total=4789,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:42.620 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:42.620 job3: (groupid=0, jobs=1): err= 0: pid=557000: Thu Dec 5 13:45:24 2024 00:13:42.620 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:13:42.620 slat (nsec): min=1197, max=23887k, avg=156419.87, stdev=1182349.80 00:13:42.620 clat (usec): min=1349, max=63992, avg=18746.77, stdev=10012.58 00:13:42.620 lat (usec): min=3907, max=64014, avg=18903.19, stdev=10123.65 00:13:42.620 clat percentiles (usec): 00:13:42.620 | 1.00th=[ 7439], 5.00th=[10683], 10.00th=[12518], 20.00th=[12911], 00:13:42.620 | 30.00th=[13173], 40.00th=[14222], 50.00th=[14615], 
60.00th=[15664], 00:13:42.620 | 70.00th=[17171], 80.00th=[23987], 90.00th=[32637], 95.00th=[45351], 00:13:42.620 | 99.00th=[54264], 99.50th=[55837], 99.90th=[57410], 99.95th=[60031], 00:13:42.620 | 99.99th=[64226] 00:13:42.620 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:13:42.620 slat (usec): min=2, max=10691, avg=104.15, stdev=591.38 00:13:42.620 clat (usec): min=3029, max=57260, avg=16654.51, stdev=8615.75 00:13:42.620 lat (usec): min=3036, max=58666, avg=16758.67, stdev=8650.11 00:13:42.620 clat percentiles (usec): 00:13:42.620 | 1.00th=[ 4490], 5.00th=[ 7111], 10.00th=[ 9372], 20.00th=[12256], 00:13:42.620 | 30.00th=[12780], 40.00th=[13304], 50.00th=[13698], 60.00th=[14484], 00:13:42.620 | 70.00th=[20055], 80.00th=[21365], 90.00th=[21627], 95.00th=[36439], 00:13:42.620 | 99.00th=[52691], 99.50th=[55313], 99.90th=[56361], 99.95th=[57410], 00:13:42.620 | 99.99th=[57410] 00:13:42.620 bw ( KiB/s): min=11768, max=16904, per=20.07%, avg=14336.00, stdev=3631.70, samples=2 00:13:42.620 iops : min= 2942, max= 4226, avg=3584.00, stdev=907.93, samples=2 00:13:42.620 lat (msec) : 2=0.01%, 4=0.27%, 10=7.60%, 20=63.44%, 50=26.43% 00:13:42.620 lat (msec) : 100=2.25% 00:13:42.620 cpu : usr=1.99%, sys=4.69%, ctx=320, majf=0, minf=1 00:13:42.620 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:13:42.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:42.620 issued rwts: total=3577,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:42.620 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:42.620 00:13:42.620 Run status group 0 (all jobs): 00:13:42.620 READ: bw=66.1MiB/s (69.3MB/s), 13.9MiB/s-18.7MiB/s (14.6MB/s-19.6MB/s), io=69.2MiB (72.5MB), run=1003-1047msec 00:13:42.620 WRITE: bw=69.7MiB/s (73.1MB/s), 13.9MiB/s-19.9MiB/s (14.6MB/s-20.9MB/s), io=73.0MiB (76.6MB), run=1003-1047msec 00:13:42.620 
00:13:42.620 Disk stats (read/write): 00:13:42.620 nvme0n1: ios=3735/4096, merge=0/0, ticks=45471/60068, in_queue=105539, util=89.08% 00:13:42.620 nvme0n2: ios=4145/4143, merge=0/0, ticks=53537/52347, in_queue=105884, util=87.83% 00:13:42.620 nvme0n3: ios=4121/4401, merge=0/0, ticks=47604/41548, in_queue=89152, util=98.34% 00:13:42.620 nvme0n4: ios=2673/3072, merge=0/0, ticks=40108/31630, in_queue=71738, util=100.00% 00:13:42.620 13:45:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:42.620 13:45:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=557229 00:13:42.620 13:45:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:42.620 13:45:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:42.620 [global] 00:13:42.620 thread=1 00:13:42.620 invalidate=1 00:13:42.620 rw=read 00:13:42.620 time_based=1 00:13:42.620 runtime=10 00:13:42.620 ioengine=libaio 00:13:42.620 direct=1 00:13:42.620 bs=4096 00:13:42.620 iodepth=1 00:13:42.621 norandommap=1 00:13:42.621 numjobs=1 00:13:42.621 00:13:42.621 [job0] 00:13:42.621 filename=/dev/nvme0n1 00:13:42.621 [job1] 00:13:42.621 filename=/dev/nvme0n2 00:13:42.621 [job2] 00:13:42.621 filename=/dev/nvme0n3 00:13:42.621 [job3] 00:13:42.621 filename=/dev/nvme0n4 00:13:42.621 Could not set queue depth (nvme0n1) 00:13:42.621 Could not set queue depth (nvme0n2) 00:13:42.621 Could not set queue depth (nvme0n3) 00:13:42.621 Could not set queue depth (nvme0n4) 00:13:42.621 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:42.621 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:42.621 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:42.621 job3: (g=0): 
rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:42.621 fio-3.35 00:13:42.621 Starting 4 threads 00:13:45.898 13:45:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:45.898 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=45731840, buflen=4096 00:13:45.898 fio: pid=557375, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:45.898 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:45.898 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:45.898 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:45.898 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=24100864, buflen=4096 00:13:45.898 fio: pid=557374, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:45.898 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=58847232, buflen=4096 00:13:45.898 fio: pid=557372, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:46.156 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:46.156 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:46.156 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:46.156 
13:45:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:46.415 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=1142784, buflen=4096 00:13:46.415 fio: pid=557373, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:46.415 00:13:46.415 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=557372: Thu Dec 5 13:45:28 2024 00:13:46.415 read: IOPS=4539, BW=17.7MiB/s (18.6MB/s)(56.1MiB/3165msec) 00:13:46.415 slat (usec): min=6, max=15579, avg= 9.28, stdev=165.89 00:13:46.415 clat (usec): min=155, max=40590, avg=208.47, stdev=337.36 00:13:46.415 lat (usec): min=162, max=40597, avg=217.75, stdev=377.15 00:13:46.415 clat percentiles (usec): 00:13:46.415 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 194], 00:13:46.415 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 208], 00:13:46.415 | 70.00th=[ 212], 80.00th=[ 217], 90.00th=[ 223], 95.00th=[ 229], 00:13:46.415 | 99.00th=[ 247], 99.50th=[ 258], 99.90th=[ 334], 99.95th=[ 408], 00:13:46.415 | 99.99th=[ 1057] 00:13:46.415 bw ( KiB/s): min=17344, max=18704, per=49.52%, avg=18287.50, stdev=510.33, samples=6 00:13:46.415 iops : min= 4336, max= 4676, avg=4571.83, stdev=127.60, samples=6 00:13:46.415 lat (usec) : 250=99.16%, 500=0.81%, 750=0.01% 00:13:46.415 lat (msec) : 2=0.01%, 50=0.01% 00:13:46.415 cpu : usr=0.82%, sys=4.36%, ctx=14373, majf=0, minf=1 00:13:46.415 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:46.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.415 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.415 issued rwts: total=14368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.415 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:46.415 job1: (groupid=0, jobs=1): 
err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=557373: Thu Dec 5 13:45:28 2024 00:13:46.415 read: IOPS=81, BW=325KiB/s (333kB/s)(1116KiB/3433msec) 00:13:46.415 slat (usec): min=4, max=2717, avg=19.89, stdev=161.97 00:13:46.415 clat (usec): min=183, max=42049, avg=12188.37, stdev=18654.77 00:13:46.415 lat (usec): min=189, max=43919, avg=12208.25, stdev=18674.23 00:13:46.415 clat percentiles (usec): 00:13:46.415 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 204], 00:13:46.415 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 233], 00:13:46.415 | 70.00th=[ 502], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:13:46.415 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:46.415 | 99.99th=[42206] 00:13:46.415 bw ( KiB/s): min= 96, max= 1592, per=0.97%, avg=357.00, stdev=605.19, samples=6 00:13:46.415 iops : min= 24, max= 398, avg=89.17, stdev=151.34, samples=6 00:13:46.415 lat (usec) : 250=64.64%, 500=4.64%, 750=1.07% 00:13:46.415 lat (msec) : 20=0.36%, 50=28.93% 00:13:46.415 cpu : usr=0.00%, sys=0.12%, ctx=282, majf=0, minf=2 00:13:46.415 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:46.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.415 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.415 issued rwts: total=280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.415 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:46.415 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=557374: Thu Dec 5 13:45:28 2024 00:13:46.415 read: IOPS=1982, BW=7927KiB/s (8118kB/s)(23.0MiB/2969msec) 00:13:46.415 slat (nsec): min=6347, max=65842, avg=7363.46, stdev=1349.45 00:13:46.415 clat (usec): min=169, max=41968, avg=492.50, stdev=3344.98 00:13:46.416 lat (usec): min=176, max=41979, avg=499.85, stdev=3345.34 00:13:46.416 clat percentiles 
(usec): 00:13:46.416 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 204], 00:13:46.416 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 215], 60.00th=[ 219], 00:13:46.416 | 70.00th=[ 223], 80.00th=[ 227], 90.00th=[ 235], 95.00th=[ 241], 00:13:46.416 | 99.00th=[ 285], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:13:46.416 | 99.99th=[42206] 00:13:46.416 bw ( KiB/s): min= 144, max=17808, per=17.55%, avg=6483.20, stdev=8042.72, samples=5 00:13:46.416 iops : min= 36, max= 4452, avg=1620.80, stdev=2010.68, samples=5 00:13:46.416 lat (usec) : 250=97.37%, 500=1.92%, 750=0.02% 00:13:46.416 lat (msec) : 50=0.68% 00:13:46.416 cpu : usr=0.54%, sys=1.79%, ctx=5886, majf=0, minf=2 00:13:46.416 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:46.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.416 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.416 issued rwts: total=5885,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.416 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:46.416 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=557375: Thu Dec 5 13:45:28 2024 00:13:46.416 read: IOPS=4096, BW=16.0MiB/s (16.8MB/s)(43.6MiB/2726msec) 00:13:46.416 slat (nsec): min=7069, max=40785, avg=8233.97, stdev=1342.52 00:13:46.416 clat (usec): min=171, max=526, avg=233.25, stdev=31.49 00:13:46.416 lat (usec): min=179, max=534, avg=241.48, stdev=31.53 00:13:46.416 clat percentiles (usec): 00:13:46.416 | 1.00th=[ 196], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 208], 00:13:46.416 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 225], 00:13:46.416 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 285], 00:13:46.416 | 99.00th=[ 297], 99.50th=[ 302], 99.90th=[ 322], 99.95th=[ 330], 00:13:46.416 | 99.99th=[ 457] 00:13:46.416 bw ( KiB/s): min=15632, max=16776, per=44.51%, avg=16436.80, stdev=458.40, 
samples=5 00:13:46.416 iops : min= 3908, max= 4194, avg=4109.20, stdev=114.60, samples=5 00:13:46.416 lat (usec) : 250=68.46%, 500=31.52%, 750=0.01% 00:13:46.416 cpu : usr=2.02%, sys=6.79%, ctx=11168, majf=0, minf=2 00:13:46.416 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:46.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.416 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.416 issued rwts: total=11166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.416 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:46.416 00:13:46.416 Run status group 0 (all jobs): 00:13:46.416 READ: bw=36.1MiB/s (37.8MB/s), 325KiB/s-17.7MiB/s (333kB/s-18.6MB/s), io=124MiB (130MB), run=2726-3433msec 00:13:46.416 00:13:46.416 Disk stats (read/write): 00:13:46.416 nvme0n1: ios=14196/0, merge=0/0, ticks=3240/0, in_queue=3240, util=98.37% 00:13:46.416 nvme0n2: ios=277/0, merge=0/0, ticks=3319/0, in_queue=3319, util=96.29% 00:13:46.416 nvme0n3: ios=5660/0, merge=0/0, ticks=2769/0, in_queue=2769, util=96.52% 00:13:46.416 nvme0n4: ios=10696/0, merge=0/0, ticks=2361/0, in_queue=2361, util=96.45% 00:13:46.416 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:46.416 13:45:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:46.674 13:45:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:46.674 13:45:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:46.932 13:45:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:46.932 13:45:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:47.190 13:45:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:47.190 13:45:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:47.190 13:45:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:47.190 13:45:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 557229 00:13:47.190 13:45:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:47.190 13:45:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:47.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.448 13:45:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:47.448 13:45:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:13:47.448 13:45:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:47.448 13:45:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:47.448 13:45:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:47.448 13:45:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:47.448 13:45:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:13:47.448 13:45:29 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:47.448 13:45:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:47.448 nvmf hotplug test: fio failed as expected 00:13:47.448 13:45:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.708 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:47.708 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:47.708 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:47.708 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:47.708 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:47.708 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:47.708 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:13:47.708 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:47.708 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:13:47.708 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:47.708 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:47.708 rmmod nvme_tcp 00:13:47.708 rmmod nvme_fabrics 00:13:47.708 rmmod nvme_keyring 00:13:47.708 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:47.708 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # 
set -e 00:13:47.708 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:13:47.708 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 554286 ']' 00:13:47.708 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 554286 00:13:47.708 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 554286 ']' 00:13:47.708 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 554286 00:13:47.708 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:13:47.708 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:47.708 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 554286 00:13:47.708 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:47.708 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:47.708 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 554286' 00:13:47.708 killing process with pid 554286 00:13:47.708 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 554286 00:13:47.708 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 554286 00:13:47.967 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:47.967 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:47.967 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:47.967 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@297 -- # iptr 00:13:47.967 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:13:47.967 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:47.967 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:13:47.967 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:47.967 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:47.967 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.967 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:47.967 13:45:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.869 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:50.128 00:13:50.128 real 0m27.549s 00:13:50.128 user 1m49.808s 00:13:50.128 sys 0m8.942s 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.129 ************************************ 00:13:50.129 END TEST nvmf_fio_target 00:13:50.129 ************************************ 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:50.129 13:45:32 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:50.129 ************************************ 00:13:50.129 START TEST nvmf_bdevio 00:13:50.129 ************************************ 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:50.129 * Looking for test storage... 00:13:50.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@341 -- # ver2_l=1 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:50.129 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:50.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.388 --rc genhtml_branch_coverage=1 00:13:50.388 --rc genhtml_function_coverage=1 00:13:50.388 --rc genhtml_legend=1 00:13:50.388 --rc geninfo_all_blocks=1 00:13:50.388 --rc geninfo_unexecuted_blocks=1 00:13:50.388 00:13:50.388 ' 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:50.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.388 --rc genhtml_branch_coverage=1 00:13:50.388 --rc genhtml_function_coverage=1 00:13:50.388 --rc genhtml_legend=1 00:13:50.388 --rc geninfo_all_blocks=1 00:13:50.388 --rc geninfo_unexecuted_blocks=1 00:13:50.388 00:13:50.388 ' 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:50.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.388 --rc genhtml_branch_coverage=1 00:13:50.388 --rc genhtml_function_coverage=1 00:13:50.388 --rc genhtml_legend=1 00:13:50.388 --rc geninfo_all_blocks=1 00:13:50.388 --rc geninfo_unexecuted_blocks=1 00:13:50.388 00:13:50.388 ' 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:50.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.388 --rc genhtml_branch_coverage=1 00:13:50.388 --rc genhtml_function_coverage=1 00:13:50.388 --rc genhtml_legend=1 00:13:50.388 --rc geninfo_all_blocks=1 00:13:50.388 --rc geninfo_unexecuted_blocks=1 00:13:50.388 00:13:50.388 ' 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:50.388 13:45:32 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:50.388 13:45:32 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.388 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.389 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.389 13:45:32 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.389 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:50.389 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.389 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:13:50.389 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:50.389 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:50.389 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:50.389 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:13:50.389 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.389 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:50.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:50.389 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:50.389 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:50.389 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:50.389 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:50.389 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:50.389 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:13:50.389 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:50.389 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:50.389 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:50.389 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:50.389 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:50.389 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.389 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:50.389 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.389 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:50.389 
13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:50.389 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:13:50.389 13:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:56.954 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:56.954 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:13:56.954 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:56.954 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:56.954 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:56.954 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:56.954 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:56.954 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:13:56.954 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:56.954 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:13:56.954 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:13:56.954 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:13:56.954 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:13:56.954 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:13:56.954 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:13:56.954 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:13:56.954 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:56.954 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:56.955 13:45:38 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:56.955 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:56.955 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:56.955 
13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:56.955 Found net devices under 0000:86:00.0: cvl_0_0 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:56.955 Found net devices under 0000:86:00.1: cvl_0_1 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:56.955 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:56.956 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:56.956 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:56.956 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:56.956 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:56.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:56.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:13:56.956 00:13:56.956 --- 10.0.0.2 ping statistics --- 00:13:56.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.956 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:13:56.956 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:56.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:56.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:13:56.956 00:13:56.956 --- 10.0.0.1 ping statistics --- 00:13:56.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.956 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:13:56.956 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:56.956 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:13:56.956 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:56.956 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:56.956 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:56.956 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:56.956 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:56.956 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:56.956 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:56.956 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:56.956 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:56.956 13:45:38 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:56.956 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:56.956 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=561847 00:13:56.956 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:56.956 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 561847 00:13:56.956 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 561847 ']' 00:13:56.956 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.956 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:56.956 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.956 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:56.956 13:45:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:56.956 [2024-12-05 13:45:38.754917] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:13:56.956 [2024-12-05 13:45:38.754960] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.956 [2024-12-05 13:45:38.833689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:56.956 [2024-12-05 13:45:38.875545] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.956 [2024-12-05 13:45:38.875583] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.956 [2024-12-05 13:45:38.875593] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.956 [2024-12-05 13:45:38.875599] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.956 [2024-12-05 13:45:38.875604] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:56.956 [2024-12-05 13:45:38.877235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:56.956 [2024-12-05 13:45:38.877347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:56.956 [2024-12-05 13:45:38.877455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:56.956 [2024-12-05 13:45:38.877456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:57.214 [2024-12-05 13:45:39.624579] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.214 13:45:39 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:57.214 Malloc0 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:57.214 [2024-12-05 13:45:39.689574] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:57.214 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:57.214 { 00:13:57.214 "params": { 00:13:57.214 "name": "Nvme$subsystem", 00:13:57.214 "trtype": "$TEST_TRANSPORT", 00:13:57.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:57.215 "adrfam": "ipv4", 00:13:57.215 "trsvcid": "$NVMF_PORT", 00:13:57.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:57.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:57.215 "hdgst": ${hdgst:-false}, 00:13:57.215 "ddgst": ${ddgst:-false} 00:13:57.215 }, 00:13:57.215 "method": "bdev_nvme_attach_controller" 00:13:57.215 } 00:13:57.215 EOF 00:13:57.215 )") 00:13:57.215 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:13:57.215 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:13:57.215 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:13:57.215 13:45:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:57.215 "params": { 00:13:57.215 "name": "Nvme1", 00:13:57.215 "trtype": "tcp", 00:13:57.215 "traddr": "10.0.0.2", 00:13:57.215 "adrfam": "ipv4", 00:13:57.215 "trsvcid": "4420", 00:13:57.215 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:57.215 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:57.215 "hdgst": false, 00:13:57.215 "ddgst": false 00:13:57.215 }, 00:13:57.215 "method": "bdev_nvme_attach_controller" 00:13:57.215 }' 00:13:57.215 [2024-12-05 13:45:39.742045] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:13:57.215 [2024-12-05 13:45:39.742094] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid561922 ] 00:13:57.471 [2024-12-05 13:45:39.820300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:57.471 [2024-12-05 13:45:39.864103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.471 [2024-12-05 13:45:39.864212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.471 [2024-12-05 13:45:39.864212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:57.728 I/O targets: 00:13:57.728 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:57.728 00:13:57.728 00:13:57.728 CUnit - A unit testing framework for C - Version 2.1-3 00:13:57.728 http://cunit.sourceforge.net/ 00:13:57.728 00:13:57.728 00:13:57.728 Suite: bdevio tests on: Nvme1n1 00:13:57.728 Test: blockdev write read block ...passed 00:13:57.728 Test: blockdev write zeroes read block ...passed 00:13:57.728 Test: blockdev write zeroes read no split ...passed 00:13:57.728 Test: blockdev write zeroes read split 
...passed 00:13:57.728 Test: blockdev write zeroes read split partial ...passed 00:13:57.728 Test: blockdev reset ...[2024-12-05 13:45:40.222326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:13:57.728 [2024-12-05 13:45:40.222393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e0350 (9): Bad file descriptor 00:13:57.729 [2024-12-05 13:45:40.277372] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:13:57.729 passed 00:13:57.984 Test: blockdev write read 8 blocks ...passed 00:13:57.984 Test: blockdev write read size > 128k ...passed 00:13:57.984 Test: blockdev write read invalid size ...passed 00:13:57.984 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:57.984 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:57.984 Test: blockdev write read max offset ...passed 00:13:57.984 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:57.984 Test: blockdev writev readv 8 blocks ...passed 00:13:57.984 Test: blockdev writev readv 30 x 1block ...passed 00:13:57.984 Test: blockdev writev readv block ...passed 00:13:57.984 Test: blockdev writev readv size > 128k ...passed 00:13:57.984 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:57.984 Test: blockdev comparev and writev ...[2024-12-05 13:45:40.486959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:57.984 [2024-12-05 13:45:40.486986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:57.984 [2024-12-05 13:45:40.487000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:57.984 [2024-12-05 
13:45:40.487012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:57.984 [2024-12-05 13:45:40.487238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:57.984 [2024-12-05 13:45:40.487248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:57.984 [2024-12-05 13:45:40.487260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:57.984 [2024-12-05 13:45:40.487267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:57.984 [2024-12-05 13:45:40.487506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:57.984 [2024-12-05 13:45:40.487516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:57.984 [2024-12-05 13:45:40.487528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:57.984 [2024-12-05 13:45:40.487535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:57.985 [2024-12-05 13:45:40.487761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:57.985 [2024-12-05 13:45:40.487771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:57.985 [2024-12-05 13:45:40.487782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:13:57.985 [2024-12-05 13:45:40.487789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:57.985 passed 00:13:58.241 Test: blockdev nvme passthru rw ...passed 00:13:58.241 Test: blockdev nvme passthru vendor specific ...[2024-12-05 13:45:40.571742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:58.241 [2024-12-05 13:45:40.571766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:58.241 [2024-12-05 13:45:40.571873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:58.241 [2024-12-05 13:45:40.571884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:58.241 [2024-12-05 13:45:40.571981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:58.241 [2024-12-05 13:45:40.571991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:58.241 [2024-12-05 13:45:40.572094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:58.241 [2024-12-05 13:45:40.572108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:58.241 passed 00:13:58.241 Test: blockdev nvme admin passthru ...passed 00:13:58.241 Test: blockdev copy ...passed 00:13:58.241 00:13:58.241 Run Summary: Type Total Ran Passed Failed Inactive 00:13:58.241 suites 1 1 n/a 0 0 00:13:58.241 tests 23 23 23 0 0 00:13:58.241 asserts 152 152 152 0 n/a 00:13:58.241 00:13:58.241 Elapsed time = 1.119 seconds 
00:13:58.241 13:45:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.241 13:45:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.241 13:45:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:58.241 13:45:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.241 13:45:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:58.241 13:45:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:58.241 13:45:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:58.241 13:45:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:13:58.241 13:45:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:58.241 13:45:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:13:58.241 13:45:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:58.241 13:45:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:58.241 rmmod nvme_tcp 00:13:58.241 rmmod nvme_fabrics 00:13:58.241 rmmod nvme_keyring 00:13:58.241 13:45:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:58.241 13:45:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:13:58.241 13:45:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:13:58.241 13:45:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 561847 ']' 00:13:58.241 13:45:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 561847 00:13:58.498 13:45:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 561847 ']' 00:13:58.498 13:45:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 561847 00:13:58.498 13:45:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:13:58.498 13:45:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:58.498 13:45:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 561847 00:13:58.498 13:45:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:13:58.498 13:45:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:13:58.498 13:45:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 561847' 00:13:58.498 killing process with pid 561847 00:13:58.498 13:45:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 561847 00:13:58.498 13:45:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 561847 00:13:58.498 13:45:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:58.498 13:45:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:58.498 13:45:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:58.498 13:45:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:13:58.498 13:45:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:13:58.498 13:45:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:58.498 13:45:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:13:58.498 13:45:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:13:58.498 13:45:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:58.498 13:45:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.498 13:45:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:58.498 13:45:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.031 13:45:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:01.031 00:14:01.031 real 0m10.601s 00:14:01.031 user 0m12.669s 00:14:01.031 sys 0m5.010s 00:14:01.031 13:45:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:01.031 13:45:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:01.031 ************************************ 00:14:01.031 END TEST nvmf_bdevio 00:14:01.031 ************************************ 00:14:01.031 13:45:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:01.031 00:14:01.031 real 4m36.238s 00:14:01.031 user 10m21.354s 00:14:01.031 sys 1m37.692s 00:14:01.031 13:45:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:01.031 13:45:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:01.031 ************************************ 00:14:01.031 END TEST nvmf_target_core 00:14:01.031 ************************************ 00:14:01.031 13:45:43 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:01.031 13:45:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:01.031 13:45:43 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:01.031 13:45:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:14:01.031 ************************************ 00:14:01.031 START TEST nvmf_target_extra 00:14:01.031 ************************************ 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:01.032 * Looking for test storage... 00:14:01.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:01.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.032 --rc genhtml_branch_coverage=1 00:14:01.032 --rc genhtml_function_coverage=1 00:14:01.032 --rc genhtml_legend=1 00:14:01.032 --rc geninfo_all_blocks=1 
00:14:01.032 --rc geninfo_unexecuted_blocks=1 00:14:01.032 00:14:01.032 ' 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:01.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.032 --rc genhtml_branch_coverage=1 00:14:01.032 --rc genhtml_function_coverage=1 00:14:01.032 --rc genhtml_legend=1 00:14:01.032 --rc geninfo_all_blocks=1 00:14:01.032 --rc geninfo_unexecuted_blocks=1 00:14:01.032 00:14:01.032 ' 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:01.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.032 --rc genhtml_branch_coverage=1 00:14:01.032 --rc genhtml_function_coverage=1 00:14:01.032 --rc genhtml_legend=1 00:14:01.032 --rc geninfo_all_blocks=1 00:14:01.032 --rc geninfo_unexecuted_blocks=1 00:14:01.032 00:14:01.032 ' 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:01.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.032 --rc genhtml_branch_coverage=1 00:14:01.032 --rc genhtml_function_coverage=1 00:14:01.032 --rc genhtml_legend=1 00:14:01.032 --rc geninfo_all_blocks=1 00:14:01.032 --rc geninfo_unexecuted_blocks=1 00:14:01.032 00:14:01.032 ' 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:01.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:01.032 ************************************ 00:14:01.032 START TEST nvmf_example 00:14:01.032 ************************************ 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:14:01.032 * Looking for test storage... 00:14:01.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:01.032 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:01.033 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:14:01.033 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:01.292 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:01.292 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:01.292 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:01.292 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:01.292 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:14:01.292 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:14:01.292 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:14:01.292 
13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:14:01.292 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:14:01.292 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:14:01.292 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:14:01.292 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:01.292 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:14:01.292 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:14:01.292 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:01.292 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:01.292 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:14:01.292 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:14:01.292 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:01.292 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:14:01.292 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:14:01.292 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:14:01.292 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:14:01.292 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:01.292 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:14:01.292 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:14:01.292 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:01.292 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:01.292 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:14:01.292 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:01.292 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:01.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.293 --rc genhtml_branch_coverage=1 00:14:01.293 --rc genhtml_function_coverage=1 00:14:01.293 --rc genhtml_legend=1 00:14:01.293 --rc geninfo_all_blocks=1 00:14:01.293 --rc geninfo_unexecuted_blocks=1 00:14:01.293 00:14:01.293 ' 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:01.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.293 --rc genhtml_branch_coverage=1 00:14:01.293 --rc genhtml_function_coverage=1 00:14:01.293 --rc genhtml_legend=1 00:14:01.293 --rc geninfo_all_blocks=1 00:14:01.293 --rc geninfo_unexecuted_blocks=1 00:14:01.293 00:14:01.293 ' 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:01.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.293 --rc genhtml_branch_coverage=1 00:14:01.293 --rc genhtml_function_coverage=1 00:14:01.293 --rc genhtml_legend=1 00:14:01.293 --rc geninfo_all_blocks=1 00:14:01.293 --rc geninfo_unexecuted_blocks=1 00:14:01.293 00:14:01.293 ' 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:01.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.293 --rc 
genhtml_branch_coverage=1 00:14:01.293 --rc genhtml_function_coverage=1 00:14:01.293 --rc genhtml_legend=1 00:14:01.293 --rc geninfo_all_blocks=1 00:14:01.293 --rc geninfo_unexecuted_blocks=1 00:14:01.293 00:14:01.293 ' 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:01.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:14:01.293 13:45:43 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:01.293 
13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:14:01.293 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:07.856 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:07.856 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:14:07.856 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:07.856 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:07.856 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:07.856 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:07.856 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:07.856 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:14:07.856 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:07.856 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:14:07.856 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:14:07.856 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:14:07.856 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:14:07.856 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:14:07.856 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:14:07.856 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:07.856 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:07.857 13:45:49 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:07.857 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:07.857 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:07.857 Found net devices under 0000:86:00.0: cvl_0_0 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:07.857 13:45:49 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:07.857 Found net devices under 0000:86:00.1: cvl_0_1 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:07.857 
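The discovery loop traced above walks a whitelist of PCI addresses and reads the kernel net-device names sysfs exposes under each one. A minimal sketch of that idea, assuming the two e810 addresses from the log (the real logic lives in nvmf/common.sh and also filters by driver and link state, which this omits):

```shell
#!/usr/bin/env bash
# Sketch of the PCI -> netdev discovery loop traced above. The two PCI
# addresses are taken from the log; the driver/link-state filtering the
# real nvmf/common.sh performs is omitted for brevity.
pci_devs=(0000:86:00.0 0000:86:00.1)

discover() {
  local pci devs
  for pci in "${pci_devs[@]}"; do
    # Each PCI network function lists its netdev name(s) under .../net/.
    devs=("/sys/bus/pci/devices/$pci/net/"*)
    devs=("${devs[@]##*/}")    # keep only the interface basenames
    echo "Found net devices under $pci: ${devs[*]}"
  done
}
discover
```

On the machine in this log, the loop reports `cvl_0_0` under 0000:86:00.0 and `cvl_0_1` under 0000:86:00.1.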
13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:07.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:07.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:14:07.857 00:14:07.857 --- 10.0.0.2 ping statistics --- 00:14:07.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.857 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:07.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:07.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:14:07.857 00:14:07.857 --- 10.0.0.1 ping statistics --- 00:14:07.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.857 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:07.857 13:45:49 
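The `nvmf_tcp_init` sequence traced above can be summarized as a standalone script. Interface names (`cvl_0_0`/`cvl_0_1`), the 10.0.0.x addresses, port 4420, and the `SPDK_NVMF` rule tag are all from the log; the script itself is an illustrative approximation of nvmf/common.sh, not the real helper. Since the real steps need root and two physical ports, it defaults to a dry run that only prints the commands:

```shell
#!/usr/bin/env bash
# Sketch of the per-test network plumbing traced above (nvmf_tcp_init).
# DRY_RUN=1 (default) prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() { echo "+ $*"; [ "$DRY_RUN" = 1 ] || "$@"; }

TARGET_IF=cvl_0_0        # moved into the target namespace
INITIATOR_IF=cvl_0_1     # stays in the root namespace
NS=cvl_0_0_ns_spdk

# Start from clean interfaces, then isolate the target port in its own
# netns so initiator and target talk over a real wire, not loopback.
run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"

# Address both ends and bring the links (and the namespace's lo) up.
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port, tagging the rule so teardown can strip it later.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF

# Sanity-check connectivity in both directions before any NVMe-oF traffic.
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

The comment tag on the iptables rule is what makes cleanup mechanical later: teardown can filter the saved ruleset for `SPDK_NVMF` instead of tracking rule positions.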
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:14:07.857 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=565816 00:14:07.858 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:07.858 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:14:07.858 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 565816 00:14:07.858 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 565816 ']' 00:14:07.858 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.858 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:07.858 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:07.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.858 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:07.858 13:45:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:08.117 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:08.117 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:14:08.117 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:14:08.117 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:08.117 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:08.117 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:08.117 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.117 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:08.117 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.117 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:14:08.117 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.117 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:08.117 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.117 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:14:08.117 13:45:50 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:08.117 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.117 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:08.117 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.117 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:14:08.117 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:08.374 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.374 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:08.374 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.374 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:08.374 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.374 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:08.374 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.374 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:14:08.374 13:45:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:20.575 Initializing NVMe Controllers 00:14:20.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:20.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:20.575 Initialization complete. Launching workers. 00:14:20.575 ======================================================== 00:14:20.575 Latency(us) 00:14:20.575 Device Information : IOPS MiB/s Average min max 00:14:20.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18513.63 72.32 3456.59 536.82 16695.94 00:14:20.575 ======================================================== 00:14:20.575 Total : 18513.63 72.32 3456.59 536.82 16695.94 00:14:20.575 00:14:20.575 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:14:20.575 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:14:20.575 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:20.575 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:14:20.575 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:20.575 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:14:20.575 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:20.575 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:20.575 rmmod nvme_tcp 00:14:20.575 rmmod nvme_fabrics 00:14:20.575 rmmod nvme_keyring 00:14:20.575 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:20.575 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
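The RPC calls and perf invocation in the trace (nvmf_example.sh) reduce to a short sequence. The NQN, serial, malloc geometry (64 MiB of 512-byte blocks), listener address, and perf flags are from the log; invoking `scripts/rpc.py` directly, rather than through the test harness's `rpc_cmd` wrapper and target namespace, is an assumption for illustration. Dry-run by default, since it needs a running nvmf target:

```shell
#!/usr/bin/env bash
# Sketch of the subsystem setup and perf run traced above.
# DRY_RUN=1 (default) only prints the commands.
DRY_RUN=${DRY_RUN:-1}
run() { echo "+ $*"; [ "$DRY_RUN" = 1 ] || "$@"; }

RPC=scripts/rpc.py              # assumed path to the SPDK RPC client
NQN=nqn.2016-06.io.spdk:cnode1

# TCP transport with 8192-byte in-capsule data ('-t tcp -o -u 8192').
run "$RPC" nvmf_create_transport -t tcp -o -u 8192

# 64 MiB malloc bdev with 512-byte blocks; the call returns 'Malloc0'.
run "$RPC" bdev_malloc_create 64 512

# Subsystem allowing any host (-a) with a fixed serial, then attach the
# bdev as a namespace and listen on the target-namespace address.
run "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
run "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0
run "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# 10 s of 4 KiB random I/O at queue depth 64, 30% read mix, as traced.
run spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$NQN"
```

Against this configuration the log's run reports roughly 18.5k IOPS at ~3.5 ms average latency, which is the table printed above.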
00:14:20.575 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:14:20.575 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 565816 ']' 00:14:20.575 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 565816 00:14:20.575 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 565816 ']' 00:14:20.575 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 565816 00:14:20.575 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:14:20.575 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:20.575 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 565816 00:14:20.575 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:14:20.575 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:14:20.575 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 565816' 00:14:20.575 killing process with pid 565816 00:14:20.575 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 565816 00:14:20.575 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 565816 00:14:20.575 nvmf threads initialize successfully 00:14:20.575 bdev subsystem init successfully 00:14:20.575 created a nvmf target service 00:14:20.575 create targets's poll groups done 00:14:20.575 all subsystems of target started 00:14:20.575 nvmf target is running 00:14:20.575 all subsystems of target stopped 00:14:20.575 destroy targets's poll groups done 00:14:20.575 destroyed the nvmf target service 00:14:20.575 bdev subsystem finish 
successfully 00:14:20.575 nvmf threads destroy successfully 00:14:20.575 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:20.575 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:20.575 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:20.575 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:14:20.575 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:20.575 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:14:20.575 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:14:20.575 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:20.575 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:20.575 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.575 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:20.575 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.833 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:20.833 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:14:20.833 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:20.833 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:20.833 00:14:20.833 real 0m19.914s 00:14:20.833 user 0m46.196s 00:14:20.833 sys 0m6.158s 00:14:20.833 13:46:03 
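The teardown traced above (`iptr`, `remove_spdk_ns`, address flush in nvmf_tcp_fini) can be sketched the same way. The namespace name, interface, and `SPDK_NVMF` tag are from the log; using `ip netns delete` to stand in for the harness's `_remove_spdk_ns` helper is an assumption. Dry-run by default:

```shell
#!/usr/bin/env bash
# Sketch of the teardown traced above: strip the tagged iptables rules,
# delete the target namespace, and flush the initiator address.
# DRY_RUN=1 (default) only prints the commands.
DRY_RUN=${DRY_RUN:-1}
run() { echo "+ $*"; [ "$DRY_RUN" = 1 ] || "$@"; }

NS=cvl_0_0_ns_spdk
INITIATOR_IF=cvl_0_1

# Drop every rule tagged SPDK_NVMF by round-tripping the saved ruleset;
# the comment added at setup time is what makes this a simple filter.
run sh -c "iptables-save | grep -v SPDK_NVMF | iptables-restore"

# Deleting the namespace returns cvl_0_0 to the root namespace.
run ip netns delete "$NS"
run ip -4 addr flush "$INITIATOR_IF"
```

Filtering `iptables-save` output rather than deleting rules by position means teardown stays correct even if other rules were inserted in the meantime.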
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:20.833 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:20.833 ************************************ 00:14:20.833 END TEST nvmf_example 00:14:20.833 ************************************ 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:21.093 ************************************ 00:14:21.093 START TEST nvmf_filesystem 00:14:21.093 ************************************ 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:14:21.093 * Looking for test storage... 
00:14:21.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:14:21.093 
13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:21.093 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:21.093 --rc genhtml_branch_coverage=1 00:14:21.093 --rc genhtml_function_coverage=1 00:14:21.093 --rc genhtml_legend=1 00:14:21.093 --rc geninfo_all_blocks=1 00:14:21.093 --rc geninfo_unexecuted_blocks=1 00:14:21.093 00:14:21.093 ' 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:21.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.093 --rc genhtml_branch_coverage=1 00:14:21.093 --rc genhtml_function_coverage=1 00:14:21.093 --rc genhtml_legend=1 00:14:21.093 --rc geninfo_all_blocks=1 00:14:21.093 --rc geninfo_unexecuted_blocks=1 00:14:21.093 00:14:21.093 ' 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:21.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.093 --rc genhtml_branch_coverage=1 00:14:21.093 --rc genhtml_function_coverage=1 00:14:21.093 --rc genhtml_legend=1 00:14:21.093 --rc geninfo_all_blocks=1 00:14:21.093 --rc geninfo_unexecuted_blocks=1 00:14:21.093 00:14:21.093 ' 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:21.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.093 --rc genhtml_branch_coverage=1 00:14:21.093 --rc genhtml_function_coverage=1 00:14:21.093 --rc genhtml_legend=1 00:14:21.093 --rc geninfo_all_blocks=1 00:14:21.093 --rc geninfo_unexecuted_blocks=1 00:14:21.093 00:14:21.093 ' 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:14:21.093 13:46:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:14:21.093 13:46:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:14:21.093 13:46:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:14:21.093 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:14:21.094 13:46:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:14:21.094 13:46:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:14:21.094 
13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:14:21.094 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:14:21.094 #define SPDK_CONFIG_H 00:14:21.094 #define SPDK_CONFIG_AIO_FSDEV 1 00:14:21.094 #define SPDK_CONFIG_APPS 1 00:14:21.094 #define SPDK_CONFIG_ARCH native 00:14:21.094 #undef SPDK_CONFIG_ASAN 00:14:21.094 #undef SPDK_CONFIG_AVAHI 00:14:21.094 #undef SPDK_CONFIG_CET 00:14:21.094 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:14:21.094 #define SPDK_CONFIG_COVERAGE 1 00:14:21.094 #define SPDK_CONFIG_CROSS_PREFIX 00:14:21.094 #undef SPDK_CONFIG_CRYPTO 00:14:21.094 #undef SPDK_CONFIG_CRYPTO_MLX5 00:14:21.094 #undef SPDK_CONFIG_CUSTOMOCF 00:14:21.094 #undef SPDK_CONFIG_DAOS 00:14:21.094 #define SPDK_CONFIG_DAOS_DIR 00:14:21.094 #define SPDK_CONFIG_DEBUG 1 00:14:21.094 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:14:21.094 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:14:21.094 #define SPDK_CONFIG_DPDK_INC_DIR 00:14:21.094 #define SPDK_CONFIG_DPDK_LIB_DIR 00:14:21.094 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:14:21.094 #undef SPDK_CONFIG_DPDK_UADK 00:14:21.094 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:14:21.094 #define SPDK_CONFIG_EXAMPLES 1 00:14:21.094 #undef SPDK_CONFIG_FC 00:14:21.094 #define SPDK_CONFIG_FC_PATH 00:14:21.094 #define SPDK_CONFIG_FIO_PLUGIN 1 00:14:21.094 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:14:21.094 #define SPDK_CONFIG_FSDEV 1 00:14:21.094 #undef SPDK_CONFIG_FUSE 00:14:21.094 #undef SPDK_CONFIG_FUZZER 00:14:21.094 #define SPDK_CONFIG_FUZZER_LIB 00:14:21.094 #undef SPDK_CONFIG_GOLANG 00:14:21.094 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:14:21.094 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:14:21.094 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:14:21.094 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:14:21.094 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:14:21.094 #undef SPDK_CONFIG_HAVE_LIBBSD 00:14:21.094 #undef SPDK_CONFIG_HAVE_LZ4 00:14:21.094 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:14:21.094 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:14:21.094 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:14:21.094 #define SPDK_CONFIG_IDXD 1 00:14:21.094 #define SPDK_CONFIG_IDXD_KERNEL 1 00:14:21.094 #undef SPDK_CONFIG_IPSEC_MB 00:14:21.094 #define SPDK_CONFIG_IPSEC_MB_DIR 00:14:21.094 #define SPDK_CONFIG_ISAL 1 00:14:21.094 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:14:21.094 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:14:21.094 #define SPDK_CONFIG_LIBDIR 00:14:21.094 #undef SPDK_CONFIG_LTO 00:14:21.094 #define SPDK_CONFIG_MAX_LCORES 128 00:14:21.094 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:14:21.094 #define SPDK_CONFIG_NVME_CUSE 1 00:14:21.094 #undef SPDK_CONFIG_OCF 00:14:21.094 #define SPDK_CONFIG_OCF_PATH 00:14:21.094 #define SPDK_CONFIG_OPENSSL_PATH 00:14:21.094 #undef SPDK_CONFIG_PGO_CAPTURE 00:14:21.094 #define SPDK_CONFIG_PGO_DIR 00:14:21.095 #undef SPDK_CONFIG_PGO_USE 00:14:21.095 #define SPDK_CONFIG_PREFIX /usr/local 00:14:21.095 #undef SPDK_CONFIG_RAID5F 00:14:21.095 #undef SPDK_CONFIG_RBD 00:14:21.095 #define SPDK_CONFIG_RDMA 1 00:14:21.095 #define SPDK_CONFIG_RDMA_PROV verbs 00:14:21.095 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:14:21.095 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:14:21.095 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:14:21.095 #define SPDK_CONFIG_SHARED 1 00:14:21.095 #undef SPDK_CONFIG_SMA 00:14:21.095 #define SPDK_CONFIG_TESTS 1 00:14:21.095 #undef SPDK_CONFIG_TSAN 00:14:21.095 #define SPDK_CONFIG_UBLK 1 00:14:21.095 #define SPDK_CONFIG_UBSAN 1 00:14:21.095 #undef SPDK_CONFIG_UNIT_TESTS 00:14:21.095 #undef SPDK_CONFIG_URING 00:14:21.095 #define SPDK_CONFIG_URING_PATH 00:14:21.095 #undef SPDK_CONFIG_URING_ZNS 00:14:21.095 #undef SPDK_CONFIG_USDT 00:14:21.095 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:14:21.095 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:14:21.095 #define SPDK_CONFIG_VFIO_USER 1 00:14:21.095 #define SPDK_CONFIG_VFIO_USER_DIR 00:14:21.095 #define SPDK_CONFIG_VHOST 1 00:14:21.095 #define SPDK_CONFIG_VIRTIO 1 00:14:21.095 #undef SPDK_CONFIG_VTUNE 00:14:21.095 #define SPDK_CONFIG_VTUNE_DIR 00:14:21.095 #define SPDK_CONFIG_WERROR 1 00:14:21.095 #define SPDK_CONFIG_WPDK_DIR 00:14:21.095 #undef SPDK_CONFIG_XNVME 00:14:21.095 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:14:21.095 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:14:21.095 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:21.095 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:14:21.357 13:46:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:14:21.357 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:14:21.358 
13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:14:21.358 13:46:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:14:21.358 
13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:14:21.358 13:46:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:21.358 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 568164 ]] 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 568164 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.snighG 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:14:21.359 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.snighG/tests/target /tmp/spdk.snighG 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189660413952 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963969536 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6303555584 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97971953664 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981984768 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169753088 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192797184 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981558784 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981984768 00:14:21.360 13:46:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=425984 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:14:21.360 * Looking for test storage... 
00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189660413952 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8518148096 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.360 13:46:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:14:21.360 13:46:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:14:21.360 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:21.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.361 --rc genhtml_branch_coverage=1 00:14:21.361 --rc genhtml_function_coverage=1 00:14:21.361 --rc genhtml_legend=1 00:14:21.361 --rc geninfo_all_blocks=1 00:14:21.361 --rc geninfo_unexecuted_blocks=1 00:14:21.361 00:14:21.361 ' 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:21.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.361 --rc genhtml_branch_coverage=1 00:14:21.361 --rc genhtml_function_coverage=1 00:14:21.361 --rc genhtml_legend=1 00:14:21.361 --rc geninfo_all_blocks=1 00:14:21.361 --rc geninfo_unexecuted_blocks=1 00:14:21.361 00:14:21.361 ' 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:21.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.361 --rc genhtml_branch_coverage=1 00:14:21.361 --rc genhtml_function_coverage=1 00:14:21.361 --rc genhtml_legend=1 00:14:21.361 --rc geninfo_all_blocks=1 00:14:21.361 --rc geninfo_unexecuted_blocks=1 00:14:21.361 00:14:21.361 ' 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:21.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.361 --rc genhtml_branch_coverage=1 00:14:21.361 --rc genhtml_function_coverage=1 00:14:21.361 --rc genhtml_legend=1 00:14:21.361 --rc geninfo_all_blocks=1 00:14:21.361 --rc geninfo_unexecuted_blocks=1 00:14:21.361 00:14:21.361 ' 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:21.361 13:46:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:21.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:21.361 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:21.362 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:21.362 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:21.362 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:21.362 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.362 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:21.362 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.362 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:21.362 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:21.362 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:14:21.362 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:27.924 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:27.924 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:14:27.924 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:14:27.924 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:27.924 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:27.924 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:27.924 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:27.924 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:14:27.924 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:27.924 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:14:27.924 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:14:27.924 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:14:27.924 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:14:27.924 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:14:27.924 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:14:27.924 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:27.925 13:46:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:27.925 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:27.925 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.925 13:46:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:27.925 Found net devices under 0000:86:00.0: cvl_0_0 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:27.925 Found net devices under 0000:86:00.1: cvl_0_1 00:14:27.925 13:46:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:27.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:27.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:14:27.925 00:14:27.925 --- 10.0.0.2 ping statistics --- 00:14:27.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.925 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:27.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:27.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:14:27.925 00:14:27.925 --- 10.0.0.1 ping statistics --- 00:14:27.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.925 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:14:27.925 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:27.926 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:14:27.926 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:27.926 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:27.926 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:27.926 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:27.926 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:27.926 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:27.926 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:27.926 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:14:27.926 13:46:09 
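The target-namespace plumbing traced above (nvmf/common.sh's `nvmf_tcp_init`) boils down to a short iproute2 sequence: move the target-side NIC port into a private network namespace, address both ends, open the NVMe/TCP port, and ping-check reachability. Below is a dry-run sketch that only *prints* the commands (the real ones need root plus the `cvl_0_0`/`cvl_0_1` interfaces specific to this test bed); names and addresses are copied from the log, and `run` is a hypothetical helper added here for illustration.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the NVMe/TCP test network setup captured in the trace.
# "run" only echoes each command; replace it with "sudo" to execute for real.
set -euo pipefail

NETNS=cvl_0_0_ns_spdk        # namespace holding the target-side port
TGT_IF=cvl_0_0               # target interface (moved into the namespace)
INI_IF=cvl_0_1               # initiator interface (stays in the root ns)
TGT_IP=10.0.0.2
INI_IP=10.0.0.1
NVMF_PORT=4420

run() { echo "+ $*"; }       # print instead of execute

run ip netns add "$NETNS"
run ip link set "$TGT_IF" netns "$NETNS"
run ip addr add "$INI_IP/24" dev "$INI_IF"
run ip netns exec "$NETNS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NETNS" ip link set "$TGT_IF" up
run ip netns exec "$NETNS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport "$NVMF_PORT" -j ACCEPT
run ping -c 1 "$TGT_IP"      # initiator -> target reachability check
```

Splitting target and initiator across namespaces lets a single physical host exercise the real kernel TCP stack end to end, which is why the trace runs every target-side command through `ip netns exec cvl_0_0_ns_spdk`.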
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:27.926 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:27.926 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:27.926 ************************************ 00:14:27.926 START TEST nvmf_filesystem_no_in_capsule 00:14:27.926 ************************************ 00:14:27.926 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:14:27.926 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:14:27.926 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:27.926 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:27.926 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:27.926 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:27.926 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=571361 00:14:27.926 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 571361 00:14:27.926 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:27.926 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 571361 ']' 00:14:27.926 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.926 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:27.926 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.926 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:27.926 13:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:27.926 [2024-12-05 13:46:10.047075] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:14:27.926 [2024-12-05 13:46:10.047120] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.926 [2024-12-05 13:46:10.126131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:27.926 [2024-12-05 13:46:10.168952] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.926 [2024-12-05 13:46:10.168988] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:27.926 [2024-12-05 13:46:10.168995] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.926 [2024-12-05 13:46:10.169001] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.926 [2024-12-05 13:46:10.169006] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:27.926 [2024-12-05 13:46:10.170551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.926 [2024-12-05 13:46:10.170660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:27.926 [2024-12-05 13:46:10.170780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.926 [2024-12-05 13:46:10.170781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:28.492 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:28.492 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:14:28.492 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:28.492 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:28.492 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:28.492 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.492 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:28.492 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:28.492 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.492 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:28.492 [2024-12-05 13:46:10.930648] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:28.492 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.492 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:28.492 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.492 13:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:28.492 Malloc1 00:14:28.492 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.492 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:28.492 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.492 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:28.751 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.751 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:28.751 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.751 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:28.751 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.751 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:28.751 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.751 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:28.751 [2024-12-05 13:46:11.091530] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:28.751 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.751 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:28.751 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:14:28.751 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:14:28.751 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:14:28.751 13:46:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:14:28.751 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:28.751 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.751 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:28.751 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.751 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:14:28.751 { 00:14:28.751 "name": "Malloc1", 00:14:28.751 "aliases": [ 00:14:28.751 "619049af-34f1-42b2-8328-6c474b579d7f" 00:14:28.751 ], 00:14:28.751 "product_name": "Malloc disk", 00:14:28.751 "block_size": 512, 00:14:28.751 "num_blocks": 1048576, 00:14:28.751 "uuid": "619049af-34f1-42b2-8328-6c474b579d7f", 00:14:28.751 "assigned_rate_limits": { 00:14:28.751 "rw_ios_per_sec": 0, 00:14:28.751 "rw_mbytes_per_sec": 0, 00:14:28.751 "r_mbytes_per_sec": 0, 00:14:28.751 "w_mbytes_per_sec": 0 00:14:28.751 }, 00:14:28.751 "claimed": true, 00:14:28.751 "claim_type": "exclusive_write", 00:14:28.751 "zoned": false, 00:14:28.751 "supported_io_types": { 00:14:28.751 "read": true, 00:14:28.751 "write": true, 00:14:28.751 "unmap": true, 00:14:28.751 "flush": true, 00:14:28.751 "reset": true, 00:14:28.751 "nvme_admin": false, 00:14:28.751 "nvme_io": false, 00:14:28.751 "nvme_io_md": false, 00:14:28.751 "write_zeroes": true, 00:14:28.751 "zcopy": true, 00:14:28.751 "get_zone_info": false, 00:14:28.751 "zone_management": false, 00:14:28.751 "zone_append": false, 00:14:28.751 "compare": false, 00:14:28.751 "compare_and_write": 
false, 00:14:28.751 "abort": true, 00:14:28.751 "seek_hole": false, 00:14:28.751 "seek_data": false, 00:14:28.751 "copy": true, 00:14:28.751 "nvme_iov_md": false 00:14:28.751 }, 00:14:28.751 "memory_domains": [ 00:14:28.751 { 00:14:28.751 "dma_device_id": "system", 00:14:28.751 "dma_device_type": 1 00:14:28.751 }, 00:14:28.751 { 00:14:28.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.751 "dma_device_type": 2 00:14:28.751 } 00:14:28.751 ], 00:14:28.751 "driver_specific": {} 00:14:28.751 } 00:14:28.751 ]' 00:14:28.751 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:14:28.751 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:14:28.751 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:14:28.751 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:14:28.751 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:14:28.751 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:14:28.751 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:28.751 13:46:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:30.127 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:14:30.127 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:14:30.127 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:30.127 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:30.127 13:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:14:32.024 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:32.024 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:32.024 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:32.024 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:32.024 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:32.024 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:14:32.024 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:32.024 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:32.024 13:46:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:32.024 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:32.024 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:32.024 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:32.024 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:32.024 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:32.024 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:32.024 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:32.024 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:32.024 13:46:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:32.958 13:46:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:33.889 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:14:33.889 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:33.889 13:46:16 
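The size bookkeeping in the entries above (`@1385-@1392` and `filesystem.sh@64-67`) derives the bdev byte size from the JSON fields `block_size` and `num_blocks`, then compares it against the value reported under `/sys/block`. With the values from this run the arithmetic reduces to:

```shell
# Reproduce the log's size math: 512-byte blocks, 1048576 blocks.
bs=512
nb=1048576
malloc_size=$((bs * nb))
echo "$malloc_size"        # 536870912 bytes, i.e. 512 MiB
```

The `(( nvme_size == malloc_size ))` check in `filesystem.sh@67` passes only because both sides resolve to the same 536870912.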
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:33.889 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:33.889 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:33.889 ************************************ 00:14:33.889 START TEST filesystem_ext4 00:14:33.889 ************************************ 00:14:33.889 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:14:33.889 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:33.889 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:33.889 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:33.889 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:14:33.889 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:14:33.889 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:14:33.889 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:14:33.889 13:46:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:14:33.889 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:14:33.889 13:46:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:33.889 mke2fs 1.47.0 (5-Feb-2023) 00:14:33.889 Discarding device blocks: 0/522240 done 00:14:33.889 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:33.889 Filesystem UUID: 235077e5-7a86-4cb4-bf6a-c3270a30cd93 00:14:33.889 Superblock backups stored on blocks: 00:14:33.889 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:33.889 00:14:33.889 Allocating group tables: 0/64 done 00:14:33.889 Writing inode tables: 0/64 done 00:14:35.262 Creating journal (8192 blocks): done 00:14:36.762 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:14:36.762 00:14:36.762 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:14:36.762 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:43.320 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:43.320 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:14:43.320 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:43.320 13:46:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:14:43.320 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:43.320 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:43.320 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 571361 00:14:43.320 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:43.320 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:43.320 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:43.320 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:43.320 00:14:43.320 real 0m8.835s 00:14:43.320 user 0m0.029s 00:14:43.320 sys 0m0.073s 00:14:43.320 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:43.320 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:43.320 ************************************ 00:14:43.320 END TEST filesystem_ext4 00:14:43.320 ************************************ 00:14:43.320 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:43.320 
13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:43.320 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:43.320 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:43.320 ************************************ 00:14:43.320 START TEST filesystem_btrfs 00:14:43.320 ************************************ 00:14:43.320 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:43.320 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:43.320 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:43.320 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:43.320 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:14:43.321 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:14:43.321 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:14:43.321 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:14:43.321 13:46:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:14:43.321 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:14:43.321 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:43.321 btrfs-progs v6.8.1 00:14:43.321 See https://btrfs.readthedocs.io for more information. 00:14:43.321 00:14:43.321 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:14:43.321 NOTE: several default settings have changed in version 5.15, please make sure 00:14:43.321 this does not affect your deployments: 00:14:43.321 - DUP for metadata (-m dup) 00:14:43.321 - enabled no-holes (-O no-holes) 00:14:43.321 - enabled free-space-tree (-R free-space-tree) 00:14:43.321 00:14:43.321 Label: (null) 00:14:43.321 UUID: 1a024604-ca3a-43bd-ac68-db7a711db54b 00:14:43.321 Node size: 16384 00:14:43.321 Sector size: 4096 (CPU page size: 4096) 00:14:43.321 Filesystem size: 510.00MiB 00:14:43.321 Block group profiles: 00:14:43.321 Data: single 8.00MiB 00:14:43.321 Metadata: DUP 32.00MiB 00:14:43.321 System: DUP 8.00MiB 00:14:43.321 SSD detected: yes 00:14:43.321 Zoned device: no 00:14:43.321 Features: extref, skinny-metadata, no-holes, free-space-tree 00:14:43.321 Checksum: crc32c 00:14:43.321 Number of devices: 1 00:14:43.321 Devices: 00:14:43.321 ID SIZE PATH 00:14:43.321 1 510.00MiB /dev/nvme0n1p1 00:14:43.321 00:14:43.321 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:14:43.321 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:43.321 13:46:25 
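The `make_filesystem` force-flag dispatch shows up in the `@935-@938` entries for all three filesystems: `mkfs.ext4` takes `-F` to force, while `mkfs.btrfs` and `mkfs.xfs` take `-f`. The branch amounts to a one-liner; `pick_force` is an illustrative name for this sketch, not a helper from `autotest_common.sh`:

```shell
# ext4's mkfs forces with -F; btrfs and xfs force with lowercase -f.
pick_force() {
    if [ "$1" = ext4 ]; then printf '%s\n' -F; else printf '%s\n' -f; fi
}
pick_force ext4    # prints -F
pick_force btrfs   # prints -f
pick_force xfs     # prints -f
```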
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:43.321 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:14:43.321 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:43.578 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:14:43.578 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:43.578 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:43.578 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 571361 00:14:43.578 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:43.578 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:43.578 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:43.578 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:43.578 00:14:43.578 real 0m0.712s 00:14:43.578 user 0m0.030s 00:14:43.578 sys 0m0.107s 00:14:43.578 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:43.578 
13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:14:43.578 ************************************ 00:14:43.578 END TEST filesystem_btrfs 00:14:43.578 ************************************ 00:14:43.578 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:14:43.578 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:43.578 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:43.578 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:43.578 ************************************ 00:14:43.578 START TEST filesystem_xfs 00:14:43.578 ************************************ 00:14:43.578 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:14:43.578 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:43.578 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:43.578 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:43.578 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:14:43.578 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:14:43.578 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:14:43.578 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:14:43.578 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:14:43.578 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:14:43.578 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:44.171 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:44.171 = sectsz=512 attr=2, projid32bit=1 00:14:44.171 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:44.171 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:44.171 data = bsize=4096 blocks=130560, imaxpct=25 00:14:44.171 = sunit=0 swidth=0 blks 00:14:44.171 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:44.171 log =internal log bsize=4096 blocks=16384, version=2 00:14:44.171 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:44.171 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:45.107 Discarding blocks...Done. 
00:14:45.107 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:14:45.107 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:47.009 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:47.009 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:14:47.009 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:47.009 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:14:47.009 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:14:47.009 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:47.009 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 571361 00:14:47.009 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:47.009 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:47.009 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:47.009 13:46:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:47.009 00:14:47.009 real 0m3.390s 00:14:47.009 user 0m0.022s 00:14:47.009 sys 0m0.078s 00:14:47.009 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:47.009 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:47.009 ************************************ 00:14:47.009 END TEST filesystem_xfs 00:14:47.009 ************************************ 00:14:47.009 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:47.009 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:47.009 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:47.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.269 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:47.269 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:14:47.269 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:47.269 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.269 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:47.269 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.269 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:14:47.269 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:47.269 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.269 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:47.269 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.269 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:47.269 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 571361 00:14:47.269 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 571361 ']' 00:14:47.269 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 571361 00:14:47.269 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:14:47.269 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.269 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 571361 00:14:47.269 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:47.269 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:47.269 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 571361' 00:14:47.269 killing process with pid 571361 00:14:47.269 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 571361 00:14:47.269 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 571361 00:14:47.617 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:47.617 00:14:47.617 real 0m20.069s 00:14:47.617 user 1m19.195s 00:14:47.617 sys 0m1.513s 00:14:47.617 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:47.617 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:47.617 ************************************ 00:14:47.617 END TEST nvmf_filesystem_no_in_capsule 00:14:47.617 ************************************ 00:14:47.617 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:14:47.617 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:47.617 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:47.617 13:46:30 
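The `kill -0 571361` probes repeated before each teardown above, and the `kill -0` inside `killprocess`, send no signal at all: signal 0 only asks the kernel whether the pid exists and is signalable, so it doubles as a liveness check. A self-contained illustration with a throwaway background process:

```shell
# Start a short-lived background job and probe it with signal 0.
sleep 30 &
pid=$!
if kill -0 "$pid" 2>/dev/null; then
    echo "pid $pid is alive"
fi
kill "$pid"                      # tear the helper back down
wait "$pid" 2>/dev/null || true  # reap it; exit status is the kill signal
```

After the `wait`, the same `kill -0 "$pid"` probe fails, which is how the `killprocess`/`wait` pair in the log confirms the nvmf target really exited.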
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:47.617 ************************************ 00:14:47.617 START TEST nvmf_filesystem_in_capsule 00:14:47.617 ************************************ 00:14:47.617 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:14:47.617 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:14:47.617 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:47.617 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:47.617 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:47.617 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:47.617 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=574838 00:14:47.617 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 574838 00:14:47.617 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:47.617 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 574838 ']' 00:14:47.617 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.617 13:46:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:47.617 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.617 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:47.617 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:47.960 [2024-12-05 13:46:30.188066] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:14:47.960 [2024-12-05 13:46:30.188107] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.960 [2024-12-05 13:46:30.265088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:47.960 [2024-12-05 13:46:30.307633] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.960 [2024-12-05 13:46:30.307671] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.960 [2024-12-05 13:46:30.307678] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:47.960 [2024-12-05 13:46:30.307684] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:47.960 [2024-12-05 13:46:30.307689] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:47.960 [2024-12-05 13:46:30.309164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.960 [2024-12-05 13:46:30.309273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:47.960 [2024-12-05 13:46:30.309446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.960 [2024-12-05 13:46:30.309447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:47.960 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:47.960 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:14:47.960 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:47.960 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:47.960 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:47.960 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:47.960 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:47.960 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:14:47.960 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.960 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:47.960 [2024-12-05 13:46:30.447768] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:47.960 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.960 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:47.960 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.960 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:48.219 Malloc1 00:14:48.219 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.219 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:48.219 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.219 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:48.219 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.219 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:48.219 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.219 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:48.219 13:46:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.219 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:48.219 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.219 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:48.219 [2024-12-05 13:46:30.605551] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.219 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.219 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:48.219 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:14:48.219 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:14:48.219 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:14:48.219 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:14:48.220 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:48.220 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.220 13:46:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:48.220 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.220 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:14:48.220 { 00:14:48.220 "name": "Malloc1", 00:14:48.220 "aliases": [ 00:14:48.220 "a2327175-df1e-4ded-8196-f21897579f1f" 00:14:48.220 ], 00:14:48.220 "product_name": "Malloc disk", 00:14:48.220 "block_size": 512, 00:14:48.220 "num_blocks": 1048576, 00:14:48.220 "uuid": "a2327175-df1e-4ded-8196-f21897579f1f", 00:14:48.220 "assigned_rate_limits": { 00:14:48.220 "rw_ios_per_sec": 0, 00:14:48.220 "rw_mbytes_per_sec": 0, 00:14:48.220 "r_mbytes_per_sec": 0, 00:14:48.220 "w_mbytes_per_sec": 0 00:14:48.220 }, 00:14:48.220 "claimed": true, 00:14:48.220 "claim_type": "exclusive_write", 00:14:48.220 "zoned": false, 00:14:48.220 "supported_io_types": { 00:14:48.220 "read": true, 00:14:48.220 "write": true, 00:14:48.220 "unmap": true, 00:14:48.220 "flush": true, 00:14:48.220 "reset": true, 00:14:48.220 "nvme_admin": false, 00:14:48.220 "nvme_io": false, 00:14:48.220 "nvme_io_md": false, 00:14:48.220 "write_zeroes": true, 00:14:48.220 "zcopy": true, 00:14:48.220 "get_zone_info": false, 00:14:48.220 "zone_management": false, 00:14:48.220 "zone_append": false, 00:14:48.220 "compare": false, 00:14:48.220 "compare_and_write": false, 00:14:48.220 "abort": true, 00:14:48.220 "seek_hole": false, 00:14:48.220 "seek_data": false, 00:14:48.220 "copy": true, 00:14:48.220 "nvme_iov_md": false 00:14:48.220 }, 00:14:48.220 "memory_domains": [ 00:14:48.220 { 00:14:48.220 "dma_device_id": "system", 00:14:48.220 "dma_device_type": 1 00:14:48.220 }, 00:14:48.220 { 00:14:48.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.220 "dma_device_type": 2 00:14:48.220 } 00:14:48.220 ], 00:14:48.220 
"driver_specific": {} 00:14:48.220 } 00:14:48.220 ]' 00:14:48.220 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:14:48.220 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:14:48.220 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:14:48.220 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:14:48.220 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:14:48.220 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:14:48.220 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:48.220 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:49.597 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:14:49.598 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:14:49.598 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:49.598 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:14:49.598 13:46:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:14:51.504 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:51.504 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:51.504 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:51.504 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:51.504 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:51.504 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:14:51.504 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:51.504 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:51.504 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:51.504 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:51.504 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:51.504 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:51.504 13:46:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:51.504 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:51.504 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:51.504 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:51.504 13:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:51.762 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:52.329 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:53.265 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:14:53.265 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:53.265 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:53.265 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:53.265 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:53.265 ************************************ 00:14:53.265 START TEST filesystem_in_capsule_ext4 00:14:53.265 ************************************ 00:14:53.265 13:46:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:14:53.265 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:53.265 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:53.265 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:53.265 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:14:53.265 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:14:53.265 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:14:53.265 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:14:53.265 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:14:53.265 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:14:53.265 13:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:53.265 mke2fs 1.47.0 (5-Feb-2023) 00:14:53.265 Discarding device blocks: 
0/522240 done 00:14:53.265 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:53.265 Filesystem UUID: b708b84d-76c6-4baa-ad31-bb714129127d 00:14:53.265 Superblock backups stored on blocks: 00:14:53.265 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:53.265 00:14:53.265 Allocating group tables: 0/64 done 00:14:53.265 Writing inode tables: 0/64 done 00:14:53.523 Creating journal (8192 blocks): done 00:14:55.277 Writing superblocks and filesystem accounting information: 0/64 done 00:14:55.277 00:14:55.277 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:14:55.277 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:01.836 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:01.836 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:15:01.836 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:01.836 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:15:01.836 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:15:01.836 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:01.836 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 574838 00:15:01.836 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:01.836 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:01.836 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:01.836 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:01.836 00:15:01.836 real 0m8.224s 00:15:01.836 user 0m0.035s 00:15:01.836 sys 0m0.065s 00:15:01.836 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.836 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:15:01.836 ************************************ 00:15:01.836 END TEST filesystem_in_capsule_ext4 00:15:01.836 ************************************ 00:15:01.836 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:15:01.836 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:01.836 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.836 13:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:01.836 ************************************ 00:15:01.836 START 
TEST filesystem_in_capsule_btrfs 00:15:01.836 ************************************ 00:15:01.836 13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:15:01.836 13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:15:01.836 13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:01.836 13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:15:01.836 13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:15:01.836 13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:15:01.836 13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:15:01.836 13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:15:01.836 13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:15:01.836 13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:15:01.836 13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:15:01.836 btrfs-progs v6.8.1 00:15:01.836 See https://btrfs.readthedocs.io for more information. 00:15:01.836 00:15:01.836 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:15:01.836 NOTE: several default settings have changed in version 5.15, please make sure 00:15:01.836 this does not affect your deployments: 00:15:01.836 - DUP for metadata (-m dup) 00:15:01.836 - enabled no-holes (-O no-holes) 00:15:01.836 - enabled free-space-tree (-R free-space-tree) 00:15:01.836 00:15:01.836 Label: (null) 00:15:01.836 UUID: 7ae1a144-5d6d-4018-975d-6dbd407d3eab 00:15:01.836 Node size: 16384 00:15:01.836 Sector size: 4096 (CPU page size: 4096) 00:15:01.836 Filesystem size: 510.00MiB 00:15:01.836 Block group profiles: 00:15:01.836 Data: single 8.00MiB 00:15:01.836 Metadata: DUP 32.00MiB 00:15:01.836 System: DUP 8.00MiB 00:15:01.836 SSD detected: yes 00:15:01.836 Zoned device: no 00:15:01.836 Features: extref, skinny-metadata, no-holes, free-space-tree 00:15:01.836 Checksum: crc32c 00:15:01.836 Number of devices: 1 00:15:01.836 Devices: 00:15:01.836 ID SIZE PATH 00:15:01.836 1 510.00MiB /dev/nvme0n1p1 00:15:01.836 00:15:01.836 13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:15:01.836 13:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:02.772 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:02.772 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:15:02.772 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:02.772 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:15:02.773 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:15:02.773 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:02.773 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 574838 00:15:02.773 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:02.773 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:02.773 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:02.773 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:02.773 00:15:02.773 real 0m1.111s 00:15:02.773 user 0m0.036s 00:15:02.773 sys 0m0.104s 00:15:02.773 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:02.773 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:15:02.773 ************************************ 00:15:02.773 END TEST filesystem_in_capsule_btrfs 00:15:02.773 ************************************ 00:15:02.773 13:46:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:15:02.773 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:02.773 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:02.773 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:02.773 ************************************ 00:15:02.773 START TEST filesystem_in_capsule_xfs 00:15:02.773 ************************************ 00:15:02.773 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:15:02.773 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:15:02.773 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:02.773 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:15:02.773 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:15:02.773 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:15:02.773 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:15:02.773 
13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:15:02.773 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:15:02.773 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:15:02.773 13:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:15:02.773 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:15:02.773 = sectsz=512 attr=2, projid32bit=1 00:15:02.773 = crc=1 finobt=1, sparse=1, rmapbt=0 00:15:02.773 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:15:02.773 data = bsize=4096 blocks=130560, imaxpct=25 00:15:02.773 = sunit=0 swidth=0 blks 00:15:02.773 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:15:02.773 log =internal log bsize=4096 blocks=16384, version=2 00:15:02.773 = sectsz=512 sunit=0 blks, lazy-count=1 00:15:02.773 realtime =none extsz=4096 blocks=0, rtextents=0 00:15:03.706 Discarding blocks...Done. 
00:15:03.706 13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:15:03.706 13:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:05.609 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:05.609 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:15:05.609 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:05.609 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:15:05.609 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:15:05.609 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:05.609 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 574838 00:15:05.609 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:05.609 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:05.609 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:15:05.609 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:05.609 00:15:05.609 real 0m2.712s 00:15:05.609 user 0m0.022s 00:15:05.609 sys 0m0.077s 00:15:05.609 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:05.609 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:15:05.609 ************************************ 00:15:05.609 END TEST filesystem_in_capsule_xfs 00:15:05.609 ************************************ 00:15:05.609 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:15:05.610 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:15:05.610 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:05.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.610 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:05.610 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:15:05.610 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:05.610 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:05.610 13:46:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:05.610 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:05.610 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:15:05.610 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:05.610 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.610 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:05.610 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.610 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:05.610 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 574838 00:15:05.610 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 574838 ']' 00:15:05.610 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 574838 00:15:05.610 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:15:05.610 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:05.610 13:46:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 574838 00:15:05.869 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:05.869 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:05.869 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 574838' 00:15:05.869 killing process with pid 574838 00:15:05.869 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 574838 00:15:05.869 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 574838 00:15:06.129 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:15:06.129 00:15:06.129 real 0m18.387s 00:15:06.129 user 1m12.392s 00:15:06.129 sys 0m1.447s 00:15:06.129 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:06.129 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:06.129 ************************************ 00:15:06.129 END TEST nvmf_filesystem_in_capsule 00:15:06.129 ************************************ 00:15:06.129 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:15:06.129 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:06.129 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:15:06.129 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:06.129 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:15:06.129 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:06.129 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:06.129 rmmod nvme_tcp 00:15:06.129 rmmod nvme_fabrics 00:15:06.129 rmmod nvme_keyring 00:15:06.129 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:06.129 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:15:06.129 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:15:06.129 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:15:06.129 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:06.129 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:06.129 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:06.129 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:15:06.129 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:15:06.129 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:06.129 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:15:06.129 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:06.129 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:06.129 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.129 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:06.129 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.662 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:08.662 00:15:08.662 real 0m47.250s 00:15:08.662 user 2m33.696s 00:15:08.662 sys 0m7.650s 00:15:08.662 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:08.662 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:08.662 ************************************ 00:15:08.662 END TEST nvmf_filesystem 00:15:08.662 ************************************ 00:15:08.662 13:46:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:15:08.662 13:46:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:08.662 13:46:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:08.662 13:46:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:08.662 ************************************ 00:15:08.662 START TEST nvmf_target_discovery 00:15:08.662 ************************************ 00:15:08.662 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:15:08.662 * Looking for test storage... 
00:15:08.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:08.662 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:08.662 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:15:08.662 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:08.662 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:08.662 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:08.662 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:08.662 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:08.662 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:08.662 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:08.662 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:08.662 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:08.662 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:08.662 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:08.662 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:08.662 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:08.662 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:15:08.662 
13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:15:08.662 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:08.662 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:08.662 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:08.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.663 --rc genhtml_branch_coverage=1 00:15:08.663 --rc genhtml_function_coverage=1 00:15:08.663 --rc genhtml_legend=1 00:15:08.663 --rc geninfo_all_blocks=1 00:15:08.663 --rc geninfo_unexecuted_blocks=1 00:15:08.663 00:15:08.663 ' 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:08.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.663 --rc genhtml_branch_coverage=1 00:15:08.663 --rc genhtml_function_coverage=1 00:15:08.663 --rc genhtml_legend=1 00:15:08.663 --rc geninfo_all_blocks=1 00:15:08.663 --rc geninfo_unexecuted_blocks=1 00:15:08.663 00:15:08.663 ' 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:08.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.663 --rc genhtml_branch_coverage=1 00:15:08.663 --rc genhtml_function_coverage=1 00:15:08.663 --rc genhtml_legend=1 00:15:08.663 --rc geninfo_all_blocks=1 00:15:08.663 --rc geninfo_unexecuted_blocks=1 00:15:08.663 00:15:08.663 ' 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:08.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.663 --rc genhtml_branch_coverage=1 00:15:08.663 --rc genhtml_function_coverage=1 00:15:08.663 --rc genhtml_legend=1 00:15:08.663 --rc geninfo_all_blocks=1 00:15:08.663 --rc geninfo_unexecuted_blocks=1 00:15:08.663 00:15:08.663 ' 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:08.663 13:46:50 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:08.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.663 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:08.663 13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:08.664 13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:15:08.664 13:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:15.228 13:46:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:15.228 13:46:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:15.228 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:15.228 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:15.228 13:46:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:15.228 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:15.229 Found net devices under 0000:86:00.0: cvl_0_0 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:15.229 13:46:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:15.229 Found net devices under 0000:86:00.1: cvl_0_1 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up
00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:15:15.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:15.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms
00:15:15.229
00:15:15.229 --- 10.0.0.2 ping statistics ---
00:15:15.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:15.229 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms
00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:15:15.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:15.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms
00:15:15.229
00:15:15.229 --- 10.0.0.1 ping statistics ---
00:15:15.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:15.229 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms
00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0
00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=581618
00:15:15.229 13:46:56
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 581618
00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 581618 ']'
00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:15.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:15.229 13:46:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:15.229 [2024-12-05 13:46:57.034534] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization...
00:15:15.229 [2024-12-05 13:46:57.034585] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:15.229 [2024-12-05 13:46:57.116456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:15:15.229 [2024-12-05 13:46:57.162178] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:15:15.229 [2024-12-05 13:46:57.162217] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:15:15.229 [2024-12-05 13:46:57.162225] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:15:15.229 [2024-12-05 13:46:57.162232] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:15:15.229 [2024-12-05 13:46:57.162237] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:15:15.229 [2024-12-05 13:46:57.163839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:15:15.229 [2024-12-05 13:46:57.163946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:15:15.229 [2024-12-05 13:46:57.164052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:15.229 [2024-12-05 13:46:57.164053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0
00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable
00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery --
common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:15.487 [2024-12-05 13:46:57.915896] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:15.487 Null1 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.487 
13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:15.487 [2024-12-05 13:46:57.968509] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:15.487 Null2 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:15.487 
13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.487 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:15.487 Null3 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:15.487 Null4 00:15:15.487 
13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.487 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:15.745 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.745 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:15.745 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable
00:15:15.745 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:15.745 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:15.745 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:15:15.745 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:15.745 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:15.745 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:15.745 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420
00:15:15.745
00:15:15.745 Discovery Log Number of Records 6, Generation counter 6
00:15:15.745 =====Discovery Log Entry 0======
00:15:15.745 trtype: tcp
00:15:15.745 adrfam: ipv4
00:15:15.745 subtype: current discovery subsystem
00:15:15.745 treq: not required
00:15:15.745 portid: 0
00:15:15.745 trsvcid: 4420
00:15:15.745 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:15:15.745 traddr: 10.0.0.2
00:15:15.745 eflags: explicit discovery connections, duplicate discovery information
00:15:15.745 sectype: none
00:15:15.745 =====Discovery Log Entry 1======
00:15:15.745 trtype: tcp
00:15:15.745 adrfam: ipv4
00:15:15.745 subtype: nvme subsystem
00:15:15.745 treq: not required
00:15:15.745 portid: 0
00:15:15.745 trsvcid: 4420
00:15:15.745 subnqn: nqn.2016-06.io.spdk:cnode1
00:15:15.745 traddr: 10.0.0.2
00:15:15.745 eflags: none
00:15:15.745 sectype: none
00:15:15.745 =====Discovery Log Entry 2======
00:15:15.745 trtype: tcp
00:15:15.745 adrfam: ipv4
00:15:15.745 subtype: nvme subsystem
00:15:15.745 treq: not required
00:15:15.745 portid: 0
00:15:15.745 trsvcid: 4420
00:15:15.745 subnqn: nqn.2016-06.io.spdk:cnode2
00:15:15.745 traddr: 10.0.0.2
00:15:15.745 eflags: none
00:15:15.745 sectype: none
00:15:15.745 =====Discovery Log Entry 3======
00:15:15.745 trtype: tcp
00:15:15.745 adrfam: ipv4
00:15:15.745 subtype: nvme subsystem
00:15:15.745 treq: not required
00:15:15.745 portid: 0
00:15:15.745 trsvcid: 4420
00:15:15.745 subnqn: nqn.2016-06.io.spdk:cnode3
00:15:15.745 traddr: 10.0.0.2
00:15:15.745 eflags: none
00:15:15.745 sectype: none
00:15:15.745 =====Discovery Log Entry 4======
00:15:15.745 trtype: tcp
00:15:15.745 adrfam: ipv4
00:15:15.745 subtype: nvme subsystem
00:15:15.745 treq: not required
00:15:15.745 portid: 0
00:15:15.745 trsvcid: 4420
00:15:15.745 subnqn: nqn.2016-06.io.spdk:cnode4
00:15:15.745 traddr: 10.0.0.2
00:15:15.745 eflags: none
00:15:15.745 sectype: none
00:15:15.745 =====Discovery Log Entry 5======
00:15:15.745 trtype: tcp
00:15:15.745 adrfam: ipv4
00:15:15.745 subtype: discovery subsystem referral
00:15:15.745 treq: not required
00:15:15.745 portid: 0
00:15:15.745 trsvcid: 4430
00:15:15.745 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:15:15.745 traddr: 10.0.0.2
00:15:15.745 eflags: none
00:15:15.745 sectype: none
00:15:15.745 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:15:15.745 Perform nvmf subsystem discovery via RPC
00:15:15.745 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:15:15.745 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:15.745 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:15.745 [
00:15:15.745 {
00:15:15.745 "nqn":
"nqn.2014-08.org.nvmexpress.discovery", 00:15:15.745 "subtype": "Discovery", 00:15:15.745 "listen_addresses": [ 00:15:15.745 { 00:15:15.745 "trtype": "TCP", 00:15:15.745 "adrfam": "IPv4", 00:15:15.745 "traddr": "10.0.0.2", 00:15:15.745 "trsvcid": "4420" 00:15:15.745 } 00:15:15.745 ], 00:15:15.745 "allow_any_host": true, 00:15:15.745 "hosts": [] 00:15:15.745 }, 00:15:15.745 { 00:15:15.745 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:15.745 "subtype": "NVMe", 00:15:15.745 "listen_addresses": [ 00:15:15.745 { 00:15:15.745 "trtype": "TCP", 00:15:15.745 "adrfam": "IPv4", 00:15:15.745 "traddr": "10.0.0.2", 00:15:15.745 "trsvcid": "4420" 00:15:15.745 } 00:15:15.745 ], 00:15:15.745 "allow_any_host": true, 00:15:15.745 "hosts": [], 00:15:15.745 "serial_number": "SPDK00000000000001", 00:15:15.745 "model_number": "SPDK bdev Controller", 00:15:15.745 "max_namespaces": 32, 00:15:15.745 "min_cntlid": 1, 00:15:15.745 "max_cntlid": 65519, 00:15:15.745 "namespaces": [ 00:15:15.745 { 00:15:15.745 "nsid": 1, 00:15:15.745 "bdev_name": "Null1", 00:15:15.745 "name": "Null1", 00:15:15.745 "nguid": "4230032D692B4A51B639B3805020A7E2", 00:15:15.745 "uuid": "4230032d-692b-4a51-b639-b3805020a7e2" 00:15:15.745 } 00:15:15.745 ] 00:15:15.745 }, 00:15:15.745 { 00:15:15.745 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:15.745 "subtype": "NVMe", 00:15:15.745 "listen_addresses": [ 00:15:15.745 { 00:15:15.745 "trtype": "TCP", 00:15:15.745 "adrfam": "IPv4", 00:15:15.745 "traddr": "10.0.0.2", 00:15:15.745 "trsvcid": "4420" 00:15:15.745 } 00:15:15.745 ], 00:15:15.745 "allow_any_host": true, 00:15:15.745 "hosts": [], 00:15:15.745 "serial_number": "SPDK00000000000002", 00:15:15.745 "model_number": "SPDK bdev Controller", 00:15:15.745 "max_namespaces": 32, 00:15:15.745 "min_cntlid": 1, 00:15:15.746 "max_cntlid": 65519, 00:15:15.746 "namespaces": [ 00:15:15.746 { 00:15:15.746 "nsid": 1, 00:15:15.746 "bdev_name": "Null2", 00:15:15.746 "name": "Null2", 00:15:15.746 "nguid": "1956D0BEC0D74A448A6202C9EBB539E8", 
00:15:15.746 "uuid": "1956d0be-c0d7-4a44-8a62-02c9ebb539e8" 00:15:15.746 } 00:15:15.746 ] 00:15:15.746 }, 00:15:15.746 { 00:15:15.746 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:15:15.746 "subtype": "NVMe", 00:15:15.746 "listen_addresses": [ 00:15:15.746 { 00:15:15.746 "trtype": "TCP", 00:15:15.746 "adrfam": "IPv4", 00:15:15.746 "traddr": "10.0.0.2", 00:15:15.746 "trsvcid": "4420" 00:15:15.746 } 00:15:15.746 ], 00:15:15.746 "allow_any_host": true, 00:15:15.746 "hosts": [], 00:15:15.746 "serial_number": "SPDK00000000000003", 00:15:15.746 "model_number": "SPDK bdev Controller", 00:15:15.746 "max_namespaces": 32, 00:15:15.746 "min_cntlid": 1, 00:15:15.746 "max_cntlid": 65519, 00:15:15.746 "namespaces": [ 00:15:15.746 { 00:15:15.746 "nsid": 1, 00:15:15.746 "bdev_name": "Null3", 00:15:15.746 "name": "Null3", 00:15:15.746 "nguid": "69229701F00C4FD4BB70CC05B441CACB", 00:15:15.746 "uuid": "69229701-f00c-4fd4-bb70-cc05b441cacb" 00:15:15.746 } 00:15:15.746 ] 00:15:15.746 }, 00:15:15.746 { 00:15:15.746 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:15:15.746 "subtype": "NVMe", 00:15:15.746 "listen_addresses": [ 00:15:15.746 { 00:15:15.746 "trtype": "TCP", 00:15:15.746 "adrfam": "IPv4", 00:15:15.746 "traddr": "10.0.0.2", 00:15:15.746 "trsvcid": "4420" 00:15:15.746 } 00:15:15.746 ], 00:15:15.746 "allow_any_host": true, 00:15:15.746 "hosts": [], 00:15:15.746 "serial_number": "SPDK00000000000004", 00:15:15.746 "model_number": "SPDK bdev Controller", 00:15:15.746 "max_namespaces": 32, 00:15:15.746 "min_cntlid": 1, 00:15:15.746 "max_cntlid": 65519, 00:15:15.746 "namespaces": [ 00:15:15.746 { 00:15:15.746 "nsid": 1, 00:15:15.746 "bdev_name": "Null4", 00:15:15.746 "name": "Null4", 00:15:15.746 "nguid": "FE3351F93D1D46E7BCF35E2D528FB92F", 00:15:15.746 "uuid": "fe3351f9-3d1d-46e7-bcf3-5e2d528fb92f" 00:15:15.746 } 00:15:15.746 ] 00:15:15.746 } 00:15:15.746 ] 00:15:15.746 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.003 
13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:16.003 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:16.003 rmmod nvme_tcp 00:15:16.003 rmmod nvme_fabrics 00:15:16.003 rmmod nvme_keyring 00:15:16.004 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:16.004 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:15:16.004 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:15:16.004 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 581618 ']' 00:15:16.004 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 581618 00:15:16.004 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 581618 ']' 00:15:16.004 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 581618 00:15:16.004 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:15:16.004 
13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:16.004 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 581618 00:15:16.004 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:16.004 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:16.004 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 581618' 00:15:16.004 killing process with pid 581618 00:15:16.004 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 581618 00:15:16.004 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 581618 00:15:16.263 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:16.263 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:16.263 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:16.263 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:15:16.263 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:15:16.263 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:16.263 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:15:16.263 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:16.263 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:15:16.263 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.263 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:16.263 13:46:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.799 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:18.799 00:15:18.799 real 0m10.022s 00:15:18.799 user 0m8.264s 00:15:18.799 sys 0m4.930s 00:15:18.799 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.799 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.799 ************************************ 00:15:18.799 END TEST nvmf_target_discovery 00:15:18.799 ************************************ 00:15:18.799 13:47:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:15:18.799 13:47:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:18.799 13:47:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.799 13:47:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:18.799 ************************************ 00:15:18.799 START TEST nvmf_referrals 00:15:18.799 ************************************ 00:15:18.799 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:15:18.799 * Looking for test storage... 
00:15:18.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:18.799 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:18.799 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:15:18.799 13:47:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:15:18.799 13:47:01 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:18.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.799 
--rc genhtml_branch_coverage=1 00:15:18.799 --rc genhtml_function_coverage=1 00:15:18.799 --rc genhtml_legend=1 00:15:18.799 --rc geninfo_all_blocks=1 00:15:18.799 --rc geninfo_unexecuted_blocks=1 00:15:18.799 00:15:18.799 ' 00:15:18.799 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:18.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.799 --rc genhtml_branch_coverage=1 00:15:18.799 --rc genhtml_function_coverage=1 00:15:18.799 --rc genhtml_legend=1 00:15:18.799 --rc geninfo_all_blocks=1 00:15:18.799 --rc geninfo_unexecuted_blocks=1 00:15:18.799 00:15:18.799 ' 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:18.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.800 --rc genhtml_branch_coverage=1 00:15:18.800 --rc genhtml_function_coverage=1 00:15:18.800 --rc genhtml_legend=1 00:15:18.800 --rc geninfo_all_blocks=1 00:15:18.800 --rc geninfo_unexecuted_blocks=1 00:15:18.800 00:15:18.800 ' 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:18.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.800 --rc genhtml_branch_coverage=1 00:15:18.800 --rc genhtml_function_coverage=1 00:15:18.800 --rc genhtml_legend=1 00:15:18.800 --rc geninfo_all_blocks=1 00:15:18.800 --rc geninfo_unexecuted_blocks=1 00:15:18.800 00:15:18.800 ' 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:18.800 
13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:18.800 13:47:01 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:18.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:18.800 13:47:01 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:15:18.800 13:47:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:25.370 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:25.370 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:15:25.370 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:25.370 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:25.370 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:25.371 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:25.371 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:25.371 Found net devices under 0000:86:00.0: cvl_0_0 00:15:25.371 13:47:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:25.371 Found net devices under 0000:86:00.1: cvl_0_1 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:25.371 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:25.372 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.372 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:25.372 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:25.372 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:25.372 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:25.372 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:25.372 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:25.372 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:25.372 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:25.372 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:25.372 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:25.372 13:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:25.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:25.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:15:25.372 00:15:25.372 --- 10.0.0.2 ping statistics --- 00:15:25.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.372 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:25.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:25.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:15:25.372 00:15:25.372 --- 10.0.0.1 ping statistics --- 00:15:25.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.372 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=585534 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 585534 00:15:25.372 
13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 585534 ']' 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:25.372 [2024-12-05 13:47:07.115843] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:15:25.372 [2024-12-05 13:47:07.115891] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.372 [2024-12-05 13:47:07.195059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:25.372 [2024-12-05 13:47:07.237254] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.372 [2024-12-05 13:47:07.237290] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:25.372 [2024-12-05 13:47:07.237297] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.372 [2024-12-05 13:47:07.237306] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.372 [2024-12-05 13:47:07.237310] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:25.372 [2024-12-05 13:47:07.238779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.372 [2024-12-05 13:47:07.238819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.372 [2024-12-05 13:47:07.238925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.372 [2024-12-05 13:47:07.238926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:25.372 [2024-12-05 13:47:07.377114] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:25.372 [2024-12-05 13:47:07.405528] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.372 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:15:25.373 13:47:07 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.373 13:47:07 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:25.373 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:25.631 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:15:25.631 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:15:25.631 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:15:25.631 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.631 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:25.631 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.631 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:15:25.631 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.631 13:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:25.631 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.631 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:15:25.631 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:25.631 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:25.631 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:25.631 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.631 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:15:25.631 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:25.631 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.631 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:15:25.632 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:15:25.632 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:15:25.632 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:25.632 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:25.632 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:25.632 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:25.632 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:25.890 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:15:25.890 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:15:25.890 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:15:25.890 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:15:25.890 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:15:25.890 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:25.890 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:15:25.890 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:25.890 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:15:25.890 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:15:25.890 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:15:25.890 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:25.890 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:15:26.148 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:15:26.148 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:15:26.148 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.148 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:26.148 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.148 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:15:26.148 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:26.148 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:26.148 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:26.148 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:15:26.148 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.148 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:26.148 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.407 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:15:26.407 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:15:26.407 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:15:26.407 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:26.407 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:26.407 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:26.407 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:26.407 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:26.407 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:15:26.407 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:15:26.407 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:15:26.407 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:15:26.407 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:15:26.407 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:26.407 13:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:15:26.666 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:15:26.666 13:47:09 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:15:26.666 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:15:26.666 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:15:26.666 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:26.666 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:15:26.666 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:15:26.666 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:15:26.666 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.666 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:26.666 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.666 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:26.666 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:15:26.666 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.666 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:15:26.666 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.666 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:15:26.666 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:15:26.666 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:26.666 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:26.666 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:26.666 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:26.666 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:26.924 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:15:26.924 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:15:26.924 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:15:26.924 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:15:26.924 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:26.924 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:15:26.924 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:26.924 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:15:26.924 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:26.924 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:26.924 rmmod nvme_tcp 00:15:26.924 rmmod nvme_fabrics 00:15:26.924 rmmod nvme_keyring 00:15:27.183 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:27.183 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:15:27.183 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:15:27.183 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 585534 ']' 00:15:27.183 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 585534 00:15:27.183 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 585534 ']' 00:15:27.183 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 585534 00:15:27.183 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:15:27.183 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:27.183 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 585534 00:15:27.183 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:27.183 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:27.183 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 585534' 00:15:27.183 killing process with pid 585534 00:15:27.183 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- 
# kill 585534 00:15:27.183 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 585534 00:15:27.183 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:27.183 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:27.183 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:27.183 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:15:27.183 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:15:27.183 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:27.183 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:15:27.183 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:27.183 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:27.183 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.183 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:27.183 13:47:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.716 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:29.716 00:15:29.716 real 0m10.934s 00:15:29.716 user 0m12.507s 00:15:29.716 sys 0m5.254s 00:15:29.716 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:29.716 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:29.716 ************************************ 
00:15:29.716 END TEST nvmf_referrals 00:15:29.716 ************************************ 00:15:29.716 13:47:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:15:29.716 13:47:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:29.716 13:47:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:29.716 13:47:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:29.716 ************************************ 00:15:29.716 START TEST nvmf_connect_disconnect 00:15:29.716 ************************************ 00:15:29.716 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:15:29.716 * Looking for test storage... 
00:15:29.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:29.716 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:29.716 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:15:29.716 13:47:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:29.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.716 --rc genhtml_branch_coverage=1 00:15:29.716 --rc genhtml_function_coverage=1 00:15:29.716 --rc genhtml_legend=1 00:15:29.716 --rc geninfo_all_blocks=1 00:15:29.716 --rc geninfo_unexecuted_blocks=1 00:15:29.716 00:15:29.716 ' 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:29.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.716 --rc genhtml_branch_coverage=1 00:15:29.716 --rc genhtml_function_coverage=1 00:15:29.716 --rc genhtml_legend=1 00:15:29.716 --rc geninfo_all_blocks=1 00:15:29.716 --rc geninfo_unexecuted_blocks=1 00:15:29.716 00:15:29.716 ' 00:15:29.716 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:29.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.716 --rc genhtml_branch_coverage=1 00:15:29.716 --rc genhtml_function_coverage=1 00:15:29.716 --rc genhtml_legend=1 00:15:29.716 --rc geninfo_all_blocks=1 00:15:29.717 --rc geninfo_unexecuted_blocks=1 00:15:29.717 00:15:29.717 ' 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:29.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.717 --rc genhtml_branch_coverage=1 00:15:29.717 --rc genhtml_function_coverage=1 00:15:29.717 --rc genhtml_legend=1 00:15:29.717 --rc geninfo_all_blocks=1 00:15:29.717 --rc geninfo_unexecuted_blocks=1 00:15:29.717 00:15:29.717 ' 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:15:29.717 13:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:36.325 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:36.325 13:47:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:15:36.325 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:36.325 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:36.325 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:36.325 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:36.325 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:36.325 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:15:36.325 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:36.325 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:15:36.325 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:15:36.325 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:15:36.325 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:15:36.325 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:15:36.325 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:15:36.325 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:36.325 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:36.326 13:47:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:36.326 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:36.326 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:36.326 13:47:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:36.326 Found net devices under 0000:86:00.0: cvl_0_0 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:36.326 13:47:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:36.326 Found net devices under 0000:86:00.1: cvl_0_1 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:36.326 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:36.326 13:47:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:36.327 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:36.327 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:36.327 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:36.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:36.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:15:36.327 00:15:36.327 --- 10.0.0.2 ping statistics --- 00:15:36.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.327 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:15:36.327 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:36.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:36.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:15:36.327 00:15:36.327 --- 10.0.0.1 ping statistics --- 00:15:36.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.327 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:15:36.327 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:36.327 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:15:36.327 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:36.327 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:36.327 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:36.327 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:36.327 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:36.327 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:36.327 13:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=589455 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 589455 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 589455 ']' 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:36.327 [2024-12-05 13:47:18.094125] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:15:36.327 [2024-12-05 13:47:18.094176] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.327 [2024-12-05 13:47:18.174424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:36.327 [2024-12-05 13:47:18.216338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:36.327 [2024-12-05 13:47:18.216381] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.327 [2024-12-05 13:47:18.216388] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.327 [2024-12-05 13:47:18.216394] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.327 [2024-12-05 13:47:18.216399] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:36.327 [2024-12-05 13:47:18.217905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.327 [2024-12-05 13:47:18.218008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.327 [2024-12-05 13:47:18.218114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.327 [2024-12-05 13:47:18.218115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:15:36.327 13:47:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:36.327 [2024-12-05 13:47:18.355174] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.327 13:47:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:36.327 [2024-12-05 13:47:18.418181] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.327 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:15:36.328 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:15:36.328 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:15:39.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.465 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:52.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:52.819 13:47:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:52.819 rmmod nvme_tcp 00:15:52.819 rmmod nvme_fabrics 00:15:52.819 rmmod nvme_keyring 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 589455 ']' 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 589455 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 589455 ']' 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 589455 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 589455 00:15:52.819 
13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 589455' 00:15:52.819 killing process with pid 589455 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 589455 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 589455 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:52.819 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:54.766 00:15:54.766 real 0m25.145s 00:15:54.766 user 1m8.155s 00:15:54.766 sys 0m5.817s 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:54.766 ************************************ 00:15:54.766 END TEST nvmf_connect_disconnect 00:15:54.766 ************************************ 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:54.766 ************************************ 00:15:54.766 START TEST nvmf_multitarget 00:15:54.766 ************************************ 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:54.766 * Looking for test storage... 
00:15:54.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:54.766 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.766 --rc genhtml_branch_coverage=1 00:15:54.766 --rc genhtml_function_coverage=1 00:15:54.766 --rc genhtml_legend=1 00:15:54.766 --rc geninfo_all_blocks=1 00:15:54.766 --rc geninfo_unexecuted_blocks=1 00:15:54.766 00:15:54.766 ' 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:54.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.766 --rc genhtml_branch_coverage=1 00:15:54.766 --rc genhtml_function_coverage=1 00:15:54.766 --rc genhtml_legend=1 00:15:54.766 --rc geninfo_all_blocks=1 00:15:54.766 --rc geninfo_unexecuted_blocks=1 00:15:54.766 00:15:54.766 ' 00:15:54.766 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:54.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.766 --rc genhtml_branch_coverage=1 00:15:54.766 --rc genhtml_function_coverage=1 00:15:54.767 --rc genhtml_legend=1 00:15:54.767 --rc geninfo_all_blocks=1 00:15:54.767 --rc geninfo_unexecuted_blocks=1 00:15:54.767 00:15:54.767 ' 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:54.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.767 --rc genhtml_branch_coverage=1 00:15:54.767 --rc genhtml_function_coverage=1 00:15:54.767 --rc genhtml_legend=1 00:15:54.767 --rc geninfo_all_blocks=1 00:15:54.767 --rc geninfo_unexecuted_blocks=1 00:15:54.767 00:15:54.767 ' 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.767 13:47:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:54.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.767 13:47:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:15:54.767 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:01.335 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:01.335 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:01.335 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:01.335 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:01.335 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:01.336 13:47:42 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:01.336 13:47:42 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:01.336 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:01.336 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.336 13:47:42 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:01.336 Found net devices under 0000:86:00.0: cvl_0_0 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.336 
13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:01.336 Found net devices under 0000:86:00.1: cvl_0_1 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:01.336 13:47:42 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:01.336 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:01.336 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:01.336 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:01.336 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:01.336 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:01.336 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:01.336 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:16:01.336 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:01.336 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:01.336 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:01.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:01.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.467 ms 00:16:01.336 00:16:01.336 --- 10.0.0.2 ping statistics --- 00:16:01.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.336 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:16:01.336 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:01.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:01.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:16:01.336 00:16:01.336 --- 10.0.0.1 ping statistics --- 00:16:01.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.336 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:16:01.336 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:01.336 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:16:01.336 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:01.337 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:01.337 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:01.337 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:01.337 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:01.337 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:01.337 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:01.337 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:01.337 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:01.337 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:01.337 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:01.337 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=595834 00:16:01.337 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:01.337 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 595834 00:16:01.337 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 595834 ']' 00:16:01.337 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.337 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:01.337 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.337 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:01.337 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:01.337 [2024-12-05 13:47:43.338394] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:16:01.337 [2024-12-05 13:47:43.338450] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.337 [2024-12-05 13:47:43.420174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:01.337 [2024-12-05 13:47:43.462789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:01.337 [2024-12-05 13:47:43.462824] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:01.337 [2024-12-05 13:47:43.462831] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:01.337 [2024-12-05 13:47:43.462837] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:01.337 [2024-12-05 13:47:43.462842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:01.337 [2024-12-05 13:47:43.464361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.337 [2024-12-05 13:47:43.464474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:01.337 [2024-12-05 13:47:43.464508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.337 [2024-12-05 13:47:43.464508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:01.337 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:01.337 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:01.337 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:01.337 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:01.337 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:01.337 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:01.337 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:01.337 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:01.337 13:47:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:01.337 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:01.337 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:01.337 "nvmf_tgt_1" 00:16:01.337 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:01.337 "nvmf_tgt_2" 00:16:01.594 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:01.594 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:01.594 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:01.594 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:01.594 true 00:16:01.594 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:01.851 true 00:16:01.851 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:01.851 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:01.851 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:01.851 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:01.851 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:01.851 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:01.851 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:01.851 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:01.851 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:01.851 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:01.851 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:01.851 rmmod nvme_tcp 00:16:01.851 rmmod nvme_fabrics 00:16:01.851 rmmod nvme_keyring 00:16:01.851 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:01.851 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:01.851 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:01.851 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 595834 ']' 00:16:01.851 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 595834 00:16:01.851 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 595834 ']' 00:16:01.851 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 595834 00:16:01.851 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:16:01.851 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:01.851 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 595834 00:16:02.110 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:02.110 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:02.110 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 595834' 00:16:02.110 killing process with pid 595834 00:16:02.110 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 595834 00:16:02.110 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 595834 00:16:02.110 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:02.110 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:02.110 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:02.110 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:02.110 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:16:02.110 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:02.110 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:16:02.110 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:02.110 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:02.110 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.110 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:02.110 13:47:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.640 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:04.640 00:16:04.640 real 0m9.591s 00:16:04.640 user 0m7.189s 00:16:04.640 sys 0m4.875s 00:16:04.640 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.640 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:04.640 ************************************ 00:16:04.640 END TEST nvmf_multitarget 00:16:04.640 ************************************ 00:16:04.640 13:47:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:04.640 13:47:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:04.640 13:47:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.640 13:47:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:04.640 ************************************ 00:16:04.640 START TEST nvmf_rpc 00:16:04.640 ************************************ 00:16:04.640 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:04.640 * Looking for test storage... 
00:16:04.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:04.640 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:04.640 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:16:04.640 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:04.640 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:04.640 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:04.641 13:47:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:04.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.641 --rc genhtml_branch_coverage=1 00:16:04.641 --rc genhtml_function_coverage=1 00:16:04.641 --rc genhtml_legend=1 00:16:04.641 --rc geninfo_all_blocks=1 00:16:04.641 --rc geninfo_unexecuted_blocks=1 
00:16:04.641 00:16:04.641 ' 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:04.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.641 --rc genhtml_branch_coverage=1 00:16:04.641 --rc genhtml_function_coverage=1 00:16:04.641 --rc genhtml_legend=1 00:16:04.641 --rc geninfo_all_blocks=1 00:16:04.641 --rc geninfo_unexecuted_blocks=1 00:16:04.641 00:16:04.641 ' 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:04.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.641 --rc genhtml_branch_coverage=1 00:16:04.641 --rc genhtml_function_coverage=1 00:16:04.641 --rc genhtml_legend=1 00:16:04.641 --rc geninfo_all_blocks=1 00:16:04.641 --rc geninfo_unexecuted_blocks=1 00:16:04.641 00:16:04.641 ' 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:04.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.641 --rc genhtml_branch_coverage=1 00:16:04.641 --rc genhtml_function_coverage=1 00:16:04.641 --rc genhtml_legend=1 00:16:04.641 --rc geninfo_all_blocks=1 00:16:04.641 --rc geninfo_unexecuted_blocks=1 00:16:04.641 00:16:04.641 ' 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.641 13:47:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:04.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:04.641 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:04.642 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:04.642 13:47:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.208 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:11.208 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:11.208 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:11.208 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:11.208 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:11.208 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:11.208 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:11.208 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:11.209 
13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:16:11.209 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:11.209 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:11.209 Found net devices under 0000:86:00.0: cvl_0_0 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:11.209 Found net devices under 0000:86:00.1: cvl_0_1 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:11.209 13:47:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:11.209 
13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:11.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:11.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms 00:16:11.209 00:16:11.209 --- 10.0.0.2 ping statistics --- 00:16:11.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.209 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:16:11.209 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:11.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:11.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:16:11.210 00:16:11.210 --- 10.0.0.1 ping statistics --- 00:16:11.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.210 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:16:11.210 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:11.210 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:16:11.210 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:11.210 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:11.210 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:11.210 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:11.210 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:11.210 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:11.210 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:11.210 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:11.210 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:11.210 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:11.210 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.210 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=599619 00:16:11.210 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 599619 00:16:11.210 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:11.210 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 599619 ']' 00:16:11.210 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.210 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:11.210 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.210 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:11.210 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.210 [2024-12-05 13:47:53.007729] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:16:11.210 [2024-12-05 13:47:53.007777] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.210 [2024-12-05 13:47:53.086785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:11.210 [2024-12-05 13:47:53.129186] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:11.210 [2024-12-05 13:47:53.129221] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:11.210 [2024-12-05 13:47:53.129228] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:11.210 [2024-12-05 13:47:53.129235] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:11.210 [2024-12-05 13:47:53.129241] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:11.210 [2024-12-05 13:47:53.130757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.210 [2024-12-05 13:47:53.130864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:11.210 [2024-12-05 13:47:53.130969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.210 [2024-12-05 13:47:53.130970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:11.210 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:11.210 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:11.210 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:11.210 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:11.210 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.210 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:11.210 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:11.210 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.210 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.210 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.210 13:47:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:11.210 "tick_rate": 2100000000, 00:16:11.210 "poll_groups": [ 00:16:11.210 { 00:16:11.210 "name": "nvmf_tgt_poll_group_000", 00:16:11.210 "admin_qpairs": 0, 00:16:11.210 "io_qpairs": 0, 00:16:11.210 "current_admin_qpairs": 0, 00:16:11.210 "current_io_qpairs": 0, 00:16:11.210 "pending_bdev_io": 0, 00:16:11.210 "completed_nvme_io": 0, 00:16:11.210 "transports": [] 00:16:11.210 }, 00:16:11.210 { 00:16:11.210 "name": "nvmf_tgt_poll_group_001", 00:16:11.210 "admin_qpairs": 0, 00:16:11.210 "io_qpairs": 0, 00:16:11.210 "current_admin_qpairs": 0, 00:16:11.210 "current_io_qpairs": 0, 00:16:11.210 "pending_bdev_io": 0, 00:16:11.210 "completed_nvme_io": 0, 00:16:11.210 "transports": [] 00:16:11.210 }, 00:16:11.210 { 00:16:11.210 "name": "nvmf_tgt_poll_group_002", 00:16:11.210 "admin_qpairs": 0, 00:16:11.210 "io_qpairs": 0, 00:16:11.210 "current_admin_qpairs": 0, 00:16:11.210 "current_io_qpairs": 0, 00:16:11.210 "pending_bdev_io": 0, 00:16:11.210 "completed_nvme_io": 0, 00:16:11.210 "transports": [] 00:16:11.210 }, 00:16:11.210 { 00:16:11.210 "name": "nvmf_tgt_poll_group_003", 00:16:11.210 "admin_qpairs": 0, 00:16:11.210 "io_qpairs": 0, 00:16:11.210 "current_admin_qpairs": 0, 00:16:11.210 "current_io_qpairs": 0, 00:16:11.210 "pending_bdev_io": 0, 00:16:11.210 "completed_nvme_io": 0, 00:16:11.210 "transports": [] 00:16:11.210 } 00:16:11.210 ] 00:16:11.210 }' 00:16:11.210 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:11.210 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:11.210 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:11.210 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:11.210 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:11.210 13:47:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:11.210 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:11.210 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:11.210 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.210 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.210 [2024-12-05 13:47:53.373434] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:11.210 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.210 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:11.210 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.210 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.210 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.210 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:11.210 "tick_rate": 2100000000, 00:16:11.210 "poll_groups": [ 00:16:11.210 { 00:16:11.210 "name": "nvmf_tgt_poll_group_000", 00:16:11.210 "admin_qpairs": 0, 00:16:11.210 "io_qpairs": 0, 00:16:11.210 "current_admin_qpairs": 0, 00:16:11.210 "current_io_qpairs": 0, 00:16:11.210 "pending_bdev_io": 0, 00:16:11.210 "completed_nvme_io": 0, 00:16:11.210 "transports": [ 00:16:11.210 { 00:16:11.210 "trtype": "TCP" 00:16:11.210 } 00:16:11.210 ] 00:16:11.210 }, 00:16:11.210 { 00:16:11.210 "name": "nvmf_tgt_poll_group_001", 00:16:11.210 "admin_qpairs": 0, 00:16:11.210 "io_qpairs": 0, 00:16:11.210 "current_admin_qpairs": 0, 00:16:11.210 "current_io_qpairs": 0, 00:16:11.210 "pending_bdev_io": 0, 00:16:11.210 
"completed_nvme_io": 0, 00:16:11.210 "transports": [ 00:16:11.210 { 00:16:11.210 "trtype": "TCP" 00:16:11.210 } 00:16:11.210 ] 00:16:11.210 }, 00:16:11.210 { 00:16:11.210 "name": "nvmf_tgt_poll_group_002", 00:16:11.210 "admin_qpairs": 0, 00:16:11.210 "io_qpairs": 0, 00:16:11.210 "current_admin_qpairs": 0, 00:16:11.210 "current_io_qpairs": 0, 00:16:11.210 "pending_bdev_io": 0, 00:16:11.210 "completed_nvme_io": 0, 00:16:11.210 "transports": [ 00:16:11.210 { 00:16:11.210 "trtype": "TCP" 00:16:11.210 } 00:16:11.210 ] 00:16:11.210 }, 00:16:11.210 { 00:16:11.210 "name": "nvmf_tgt_poll_group_003", 00:16:11.210 "admin_qpairs": 0, 00:16:11.210 "io_qpairs": 0, 00:16:11.210 "current_admin_qpairs": 0, 00:16:11.210 "current_io_qpairs": 0, 00:16:11.210 "pending_bdev_io": 0, 00:16:11.210 "completed_nvme_io": 0, 00:16:11.210 "transports": [ 00:16:11.210 { 00:16:11.210 "trtype": "TCP" 00:16:11.210 } 00:16:11.210 ] 00:16:11.210 } 00:16:11.210 ] 00:16:11.210 }' 00:16:11.210 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:11.210 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:11.210 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:11.210 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:11.210 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:11.211 
13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.211 Malloc1 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:11.211 13:47:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.211 [2024-12-05 13:47:53.553713] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:11.211 [2024-12-05 13:47:53.582318] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:16:11.211 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:11.211 could not add new controller: failed to write to nvme-fabrics device 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.211 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:12.141 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:12.141 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:12.141 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:12.141 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:12.141 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:14.663 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:14.663 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:14.663 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:14.663 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:14.663 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:14.663 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
00:16:14.663 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:14.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:14.663 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:14.663 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:14.663 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:14.663 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:14.663 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:14.663 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:14.663 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:16:14.663 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:16:14.663 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:14.663 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:14.663 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:14.663 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:14.663 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0
00:16:14.664 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:14.664 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme
00:16:14.664 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:14.664 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme
00:16:14.664 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:14.664 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme
00:16:14.664 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:14.664 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme
00:16:14.664 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]]
00:16:14.664 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:14.664 [2024-12-05 13:47:56.856219] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562'
00:16:14.664 Failed to write to /dev/nvme-fabrics: Input/output error
00:16:14.664 could not add new controller: failed to write to nvme-fabrics device
00:16:14.664 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1
00:16:14.664 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:14.664 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:14.664 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:14.664 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:16:14.664 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:14.664 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:14.664 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:14.664 13:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:15.596 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:16:15.596 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:16:15.596 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:15.596 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:15.596 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:18.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:18.121 [2024-12-05 13:48:00.236340] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.121 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:19.051 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:16:19.051 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:16:19.051 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:19.051 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:19.051 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:20.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.944 [2024-12-05 13:48:03.507789] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.944 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:22.319 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:16:22.319 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:16:22.319 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:22.319 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:22.319 13:48:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:16:24.220 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:24.220 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:24.220 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:24.220 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:24.220 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:24.220 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:16:24.220 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:24.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:24.220 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:24.220 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:24.220 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:24.220 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:24.220 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:24.220 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:24.220 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:16:24.220 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:16:24.220 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:24.220 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:24.220 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:24.220 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:24.220 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:24.220 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:24.479 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:24.479 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:16:24.479 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:24.479 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:24.479 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:24.479 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:24.479 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:24.479 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:24.479 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:24.479 [2024-12-05 13:48:06.826517] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:24.479 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:24.479 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:16:24.479 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:24.479 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:24.479 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:24.479 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:24.479 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:24.479 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:24.479 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:24.479 13:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:25.414 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:16:25.415 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:16:25.415 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:25.415 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:25.415 13:48:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:16:27.946 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:27.946 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:27.946 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:27.946 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:27.946 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:27.946 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:16:27.946 13:48:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:27.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:27.946 [2024-12-05 13:48:10.227936] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:27.946 13:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:28.879 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:16:28.879 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:16:28.879 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:28.879 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:28.879 13:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:16:30.780 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:30.780 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:30.780 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:31.039 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:31.039 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:31.039 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:16:31.039 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:31.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:31.039 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:31.039 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:31.039 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:31.039 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:31.039 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:31.039 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:31.039 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:16:31.039 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:16:31.039 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:31.039 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:31.040 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:31.040 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:31.040 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:31.040 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:31.040 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:31.040 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:16:31.040 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:31.040 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:31.040 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:31.040 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:31.040 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:31.040 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:31.040 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:31.040 [2024-12-05 13:48:13.539642] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:31.040 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:31.040 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:16:31.040 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:31.040 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:31.040 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:31.040 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:31.040 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:31.040 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:31.040 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:31.040 13:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:32.415 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:16:32.415 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:16:32.415 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:32.415 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:32.415 13:48:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:34.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:34.314 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:34.314 [2024-12-05 13:48:16.898545] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:34.571 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:34.571 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:16:34.571 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:34.571 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:34.571 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:34.571 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:34.571 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:34.571 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:34.571 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:34.571 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:34.571 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:34.571 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:34.571 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:34.571 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99
-- # for i in $(seq 1 $loops) 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.572 [2024-12-05 13:48:16.946630] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.572 
13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:16:34.572 [2024-12-05 13:48:16.994780] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.572 13:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.572 [2024-12-05 13:48:17.042945] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.572 [2024-12-05 13:48:17.091098] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.572 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.573 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.573 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:34.573 "tick_rate": 2100000000, 00:16:34.573 "poll_groups": [ 00:16:34.573 { 00:16:34.573 "name": "nvmf_tgt_poll_group_000", 00:16:34.573 "admin_qpairs": 2, 00:16:34.573 "io_qpairs": 168, 00:16:34.573 "current_admin_qpairs": 0, 00:16:34.573 "current_io_qpairs": 0, 00:16:34.573 "pending_bdev_io": 0, 00:16:34.573 "completed_nvme_io": 273, 00:16:34.573 "transports": [ 00:16:34.573 { 00:16:34.573 "trtype": "TCP" 00:16:34.573 } 00:16:34.573 ] 00:16:34.573 }, 00:16:34.573 { 00:16:34.573 "name": "nvmf_tgt_poll_group_001", 00:16:34.573 "admin_qpairs": 2, 00:16:34.573 "io_qpairs": 168, 00:16:34.573 "current_admin_qpairs": 0, 00:16:34.573 "current_io_qpairs": 0, 00:16:34.573 "pending_bdev_io": 0, 00:16:34.573 "completed_nvme_io": 284, 00:16:34.573 "transports": [ 00:16:34.573 { 00:16:34.573 "trtype": "TCP" 00:16:34.573 } 00:16:34.573 ] 00:16:34.573 }, 00:16:34.573 { 00:16:34.573 "name": "nvmf_tgt_poll_group_002", 00:16:34.573 "admin_qpairs": 1, 00:16:34.573 "io_qpairs": 168, 00:16:34.573 "current_admin_qpairs": 0, 00:16:34.573 "current_io_qpairs": 0, 00:16:34.573 "pending_bdev_io": 0, 
00:16:34.573 "completed_nvme_io": 245, 00:16:34.573 "transports": [ 00:16:34.573 { 00:16:34.573 "trtype": "TCP" 00:16:34.573 } 00:16:34.573 ] 00:16:34.573 }, 00:16:34.573 { 00:16:34.573 "name": "nvmf_tgt_poll_group_003", 00:16:34.573 "admin_qpairs": 2, 00:16:34.573 "io_qpairs": 168, 00:16:34.573 "current_admin_qpairs": 0, 00:16:34.573 "current_io_qpairs": 0, 00:16:34.573 "pending_bdev_io": 0, 00:16:34.573 "completed_nvme_io": 220, 00:16:34.573 "transports": [ 00:16:34.573 { 00:16:34.573 "trtype": "TCP" 00:16:34.573 } 00:16:34.573 ] 00:16:34.573 } 00:16:34.573 ] 00:16:34.573 }' 00:16:34.573 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:34.573 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:34.573 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:34.573 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:34.831 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:34.831 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:34.831 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:34.831 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:34.831 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:34.831 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:16:34.831 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:34.831 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:34.831 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:16:34.831 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:34.831 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:34.831 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:34.831 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:34.831 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:34.831 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:34.831 rmmod nvme_tcp 00:16:34.831 rmmod nvme_fabrics 00:16:34.831 rmmod nvme_keyring 00:16:34.831 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:34.831 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:34.831 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:34.831 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 599619 ']' 00:16:34.831 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 599619 00:16:34.831 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 599619 ']' 00:16:34.831 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 599619 00:16:34.831 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:16:34.831 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:34.831 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 599619 00:16:34.831 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:34.831 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:34.831 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 599619' 00:16:34.831 killing process with pid 599619 00:16:34.831 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 599619 00:16:34.831 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 599619 00:16:35.089 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:35.089 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:35.089 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:35.089 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:35.089 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:16:35.090 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:35.090 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:16:35.090 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:35.090 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:35.090 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.090 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.090 13:48:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:37.620 00:16:37.620 real 0m32.830s 00:16:37.620 user 1m38.950s 00:16:37.620 sys 0m6.434s 00:16:37.620 13:48:19 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.620 ************************************ 00:16:37.620 END TEST nvmf_rpc 00:16:37.620 ************************************ 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:37.620 ************************************ 00:16:37.620 START TEST nvmf_invalid 00:16:37.620 ************************************ 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:37.620 * Looking for test storage... 
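Aside: the jsum helper seen in the nvmf_rpc output above extracts one numeric field per poll group with jq and sums it with awk (that is where the `(( 7 > 0 ))` and `(( 672 > 0 ))` checks get their left-hand values). A minimal sketch of the same sum, with sample values copied from the nvmf_get_stats output above; grep stands in for jq here only so the snippet runs without a jq dependency:

```shell
# Sample mirroring the poll_groups stats above (admin_qpairs of 2 + 2 + 1 + 2).
stats='{"poll_groups":[{"admin_qpairs":2},{"admin_qpairs":2},{"admin_qpairs":1},{"admin_qpairs":2}]}'
# The real helper pipes rpc_cmd nvmf_get_stats through:
#   jq "$filter" | awk '{s+=$1}END{print s}'
# grep -o emulates the jq field extraction for this self-contained sketch.
echo "$stats" | grep -o '"admin_qpairs":[0-9]*' | grep -o '[0-9]*$' | awk '{s+=$1} END {print s}'
```

The script then feeds the printed total straight into an arithmetic test, which is why a sum of zero (no qpairs ever created) fails the check.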
00:16:37.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:37.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.620 --rc genhtml_branch_coverage=1 00:16:37.620 --rc 
genhtml_function_coverage=1 00:16:37.620 --rc genhtml_legend=1 00:16:37.620 --rc geninfo_all_blocks=1 00:16:37.620 --rc geninfo_unexecuted_blocks=1 00:16:37.620 00:16:37.620 ' 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:37.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.620 --rc genhtml_branch_coverage=1 00:16:37.620 --rc genhtml_function_coverage=1 00:16:37.620 --rc genhtml_legend=1 00:16:37.620 --rc geninfo_all_blocks=1 00:16:37.620 --rc geninfo_unexecuted_blocks=1 00:16:37.620 00:16:37.620 ' 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:37.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.620 --rc genhtml_branch_coverage=1 00:16:37.620 --rc genhtml_function_coverage=1 00:16:37.620 --rc genhtml_legend=1 00:16:37.620 --rc geninfo_all_blocks=1 00:16:37.620 --rc geninfo_unexecuted_blocks=1 00:16:37.620 00:16:37.620 ' 00:16:37.620 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:37.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.620 --rc genhtml_branch_coverage=1 00:16:37.620 --rc genhtml_function_coverage=1 00:16:37.621 --rc genhtml_legend=1 00:16:37.621 --rc geninfo_all_blocks=1 00:16:37.621 --rc geninfo_unexecuted_blocks=1 00:16:37.621 00:16:37.621 ' 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:37.621 13:48:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:37.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:37.621 13:48:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:37.621 13:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:16:44.190 13:48:25 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:44.190 13:48:25 
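The trace above (nvmf/common.sh@320-344) builds per-family arrays of PCI vendor:device IDs so later code can classify whatever NICs the node has. Condensed into one lookup table, using only the IDs that appear in the log (the table name and helper form are illustrative, not SPDK code):

```shell
#!/usr/bin/env bash
# Vendor:device -> NIC family, condensed from the e810/x722/mlx arrays above.
# Intel vendor is 0x8086, Mellanox is 0x15b3, as set in nvmf/common.sh@313.
declare -A nic_family=(
  [0x8086:0x1592]=e810 [0x8086:0x159b]=e810
  [0x8086:0x37d2]=x722
  [0x15b3:0xa2dc]=mlx  [0x15b3:0x1021]=mlx [0x15b3:0xa2d6]=mlx
  [0x15b3:0x101d]=mlx  [0x15b3:0x101b]=mlx [0x15b3:0x1017]=mlx
  [0x15b3:0x1019]=mlx  [0x15b3:0x1015]=mlx [0x15b3:0x1013]=mlx
)
# 0x8086:0x159b is the device actually detected twice later in this run.
echo "${nic_family[0x8086:0x159b]}"   # e810
```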
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:44.190 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:44.190 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:44.190 Found net devices under 0000:86:00.0: cvl_0_0 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.190 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:44.190 Found net devices under 0000:86:00.1: cvl_0_1 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:44.191 13:48:25 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:44.191 13:48:25 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:44.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:44.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:16:44.191 00:16:44.191 --- 10.0.0.2 ping statistics --- 00:16:44.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.191 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:44.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:44.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:16:44.191 00:16:44.191 --- 10.0.0.1 ping statistics --- 00:16:44.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.191 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:44.191 13:48:25 
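The nvmf_tcp_init sequence above (nvmf/common.sh@250-291) moves one physical port into a dedicated network namespace, addresses both ends, and ping-verifies both directions. A dry-run sketch of that sequence with the names taken from the log; the commands are only printed here, not executed, since the real steps need root:

```shell
#!/usr/bin/env bash
# Dry-run reconstruction of the namespace split performed above. Interface,
# namespace, and address names come straight from the log output.
ns=cvl_0_0_ns_spdk     # namespace holding the target-side port
target_if=cvl_0_0      # port the SPDK target will listen on
initiator_if=cvl_0_1   # port left in the default namespace for the initiator
cmds=(
  "ip netns add $ns"
  "ip link set $target_if netns $ns"
  "ip addr add 10.0.0.1/24 dev $initiator_if"
  "ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if"
  "ip link set $initiator_if up"
  "ip netns exec $ns ip link set $target_if up"
  "ping -c 1 10.0.0.2"                      # initiator -> target
  "ip netns exec $ns ping -c 1 10.0.0.1"    # target -> initiator
)
printf '%s\n' "${cmds[@]}"
```

Isolating the target port this way lets a single machine act as both NVMe-oF target and initiator over real hardware, which is why every later target command in the log is prefixed with `ip netns exec cvl_0_0_ns_spdk`.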
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=607826 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 607826 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 607826 ']' 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
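`waitforlisten 607826` above blocks until the freshly started nvmf_tgt creates its JSON-RPC socket at /var/tmp/spdk.sock. A minimal sketch of such a polling loop; the function name, retry budget, and interval here are assumptions, not SPDK's actual implementation:

```shell
#!/usr/bin/env bash
# Illustrative socket-polling loop in the spirit of waitforlisten: succeed as
# soon as the Unix-domain socket exists, fail once the retry budget runs out.
wait_for_socket() {
    local sock=$1 max_retries=${2:-100}
    while (( max_retries-- > 0 )); do
        [ -S "$sock" ] && return 0    # -S: path exists and is a socket
        sleep 0.1
    done
    return 1                           # timed out waiting for the listener
}
```

Real helpers of this kind usually also check that the launched process is still alive between polls, so a target that crashes during startup fails fast instead of burning the whole retry budget.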
00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:44.191 13:48:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:44.191 [2024-12-05 13:48:25.930643] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:16:44.191 [2024-12-05 13:48:25.930696] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.191 [2024-12-05 13:48:26.013309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:44.191 [2024-12-05 13:48:26.057049] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:44.191 [2024-12-05 13:48:26.057085] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:44.191 [2024-12-05 13:48:26.057093] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:44.191 [2024-12-05 13:48:26.057099] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:44.191 [2024-12-05 13:48:26.057105] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:44.191 [2024-12-05 13:48:26.058559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.191 [2024-12-05 13:48:26.058668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:44.191 [2024-12-05 13:48:26.058772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.191 [2024-12-05 13:48:26.058773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:44.191 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:44.191 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:16:44.191 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:44.191 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:44.191 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:44.192 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:44.192 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:44.192 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6391 00:16:44.192 [2024-12-05 13:48:26.365453] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:44.192 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:44.192 { 00:16:44.192 "nqn": "nqn.2016-06.io.spdk:cnode6391", 00:16:44.192 "tgt_name": "foobar", 00:16:44.192 "method": "nvmf_create_subsystem", 00:16:44.192 "req_id": 1 00:16:44.192 } 00:16:44.192 Got JSON-RPC error 
response 00:16:44.192 response: 00:16:44.192 { 00:16:44.192 "code": -32603, 00:16:44.192 "message": "Unable to find target foobar" 00:16:44.192 }' 00:16:44.192 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:16:44.192 { 00:16:44.192 "nqn": "nqn.2016-06.io.spdk:cnode6391", 00:16:44.192 "tgt_name": "foobar", 00:16:44.192 "method": "nvmf_create_subsystem", 00:16:44.192 "req_id": 1 00:16:44.192 } 00:16:44.192 Got JSON-RPC error response 00:16:44.192 response: 00:16:44.192 { 00:16:44.192 "code": -32603, 00:16:44.192 "message": "Unable to find target foobar" 00:16:44.192 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:44.192 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:44.192 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode13223 00:16:44.192 [2024-12-05 13:48:26.566128] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13223: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:44.192 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:16:44.192 { 00:16:44.192 "nqn": "nqn.2016-06.io.spdk:cnode13223", 00:16:44.192 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:44.192 "method": "nvmf_create_subsystem", 00:16:44.192 "req_id": 1 00:16:44.192 } 00:16:44.192 Got JSON-RPC error response 00:16:44.192 response: 00:16:44.192 { 00:16:44.192 "code": -32602, 00:16:44.192 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:44.192 }' 00:16:44.192 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:16:44.192 { 00:16:44.192 "nqn": "nqn.2016-06.io.spdk:cnode13223", 00:16:44.192 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:44.192 "method": "nvmf_create_subsystem", 00:16:44.192 
"req_id": 1 00:16:44.192 } 00:16:44.192 Got JSON-RPC error response 00:16:44.192 response: 00:16:44.192 { 00:16:44.192 "code": -32602, 00:16:44.192 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:44.192 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:44.192 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:44.192 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode20229 00:16:44.192 [2024-12-05 13:48:26.766789] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20229: invalid model number 'SPDK_Controller' 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:16:44.451 { 00:16:44.451 "nqn": "nqn.2016-06.io.spdk:cnode20229", 00:16:44.451 "model_number": "SPDK_Controller\u001f", 00:16:44.451 "method": "nvmf_create_subsystem", 00:16:44.451 "req_id": 1 00:16:44.451 } 00:16:44.451 Got JSON-RPC error response 00:16:44.451 response: 00:16:44.451 { 00:16:44.451 "code": -32602, 00:16:44.451 "message": "Invalid MN SPDK_Controller\u001f" 00:16:44.451 }' 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:16:44.451 { 00:16:44.451 "nqn": "nqn.2016-06.io.spdk:cnode20229", 00:16:44.451 "model_number": "SPDK_Controller\u001f", 00:16:44.451 "method": "nvmf_create_subsystem", 00:16:44.451 "req_id": 1 00:16:44.451 } 00:16:44.451 Got JSON-RPC error response 00:16:44.451 response: 00:16:44.451 { 00:16:44.451 "code": -32602, 00:16:44.451 "message": "Invalid MN SPDK_Controller\u001f" 00:16:44.451 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.451 13:48:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:16:44.451 13:48:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:16:44.451 13:48:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.451 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:16:44.452 13:48:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.452 13:48:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.452 13:48:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ v == \- ]] 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'v-okR-XQ|)5~)8!RdOu%}' 00:16:44.452 13:48:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'v-okR-XQ|)5~)8!RdOu%}' nqn.2016-06.io.spdk:cnode22859 00:16:44.712 [2024-12-05 13:48:27.103888] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22859: invalid serial number 'v-okR-XQ|)5~)8!RdOu%}' 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:16:44.712 { 00:16:44.712 "nqn": "nqn.2016-06.io.spdk:cnode22859", 00:16:44.712 "serial_number": "v-okR-XQ|)5~)8!RdOu%}", 00:16:44.712 "method": "nvmf_create_subsystem", 00:16:44.712 "req_id": 1 00:16:44.712 } 00:16:44.712 Got JSON-RPC error response 00:16:44.712 response: 00:16:44.712 { 00:16:44.712 "code": -32602, 00:16:44.712 "message": "Invalid SN v-okR-XQ|)5~)8!RdOu%}" 00:16:44.712 }' 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:16:44.712 { 00:16:44.712 "nqn": "nqn.2016-06.io.spdk:cnode22859", 00:16:44.712 "serial_number": "v-okR-XQ|)5~)8!RdOu%}", 00:16:44.712 "method": "nvmf_create_subsystem", 00:16:44.712 "req_id": 1 00:16:44.712 } 00:16:44.712 Got JSON-RPC error response 00:16:44.712 response: 00:16:44.712 { 00:16:44.712 "code": -32602, 00:16:44.712 "message": "Invalid SN v-okR-XQ|)5~)8!RdOu%}" 00:16:44.712 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:16:44.712 13:48:27 
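The repeated `printf %x` / `echo -e` pairs in the trace above are `gen_random_s` assembling a random serial-number candidate one character at a time from the printable ASCII range. A minimal sketch of that technique (assuming bash; `gen_random_s` here is a re-creation inferred from the trace, not the original `target/invalid.sh` helper):

```shell
#!/usr/bin/env bash
# Sketch of the gen_random_s technique traced above (a re-creation, not the
# original target/invalid.sh helper): pick random codes from the printable
# ASCII range and decode each one with printf %x + echo -e, as the trace does.
gen_random_s() {
  local length=$1 ll code string=""
  local chars=($(seq 32 126))          # printable ASCII (the trace also includes 127)
  for (( ll = 0; ll < length; ll++ )); do
    code=${chars[RANDOM % ${#chars[@]}]}
    # printf %x turns the decimal code into hex; echo -e decodes '\xHH' to a char
    string+=$(echo -e "\x$(printf %x "$code")")
  done
  printf '%s\n' "$string"
}

gen_random_s 21   # a 21-character candidate, like the invalid serial number in the log
```

Serial numbers use length 21 and model numbers length 41 in the trace, matching the NVMe Identify Controller SN and MN field widths.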
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.712 13:48:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:16:44.712 13:48:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:16:44.712 13:48:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:16:44.712 13:48:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.712 13:48:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:16:44.712 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.713 13:48:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:16:44.713 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:16:44.972 13:48:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:16:44.972 13:48:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:16:44.972 13:48:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.972 13:48:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.972 13:48:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:16:44.972 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:44.973 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:44.973 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ t == \- ]] 00:16:44.973 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 't5Mlw<47q8fac%6Gto}~ig99jL&jSKs *d=*39*tf' 00:16:44.973 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 't5Mlw<47q8fac%6Gto}~ig99jL&jSKs *d=*39*tf' nqn.2016-06.io.spdk:cnode15680 00:16:45.232 [2024-12-05 13:48:27.577450] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15680: invalid model number 't5Mlw<47q8fac%6Gto}~ig99jL&jSKs *d=*39*tf' 00:16:45.232 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:16:45.232 { 00:16:45.232 "nqn": "nqn.2016-06.io.spdk:cnode15680", 00:16:45.232 "model_number": "t5Mlw<47q8fac%6Gto}~ig99jL&jSKs *d=*39*tf", 00:16:45.232 "method": "nvmf_create_subsystem", 00:16:45.232 "req_id": 1 00:16:45.232 } 00:16:45.232 Got JSON-RPC error response 00:16:45.232 response: 00:16:45.232 { 00:16:45.232 "code": -32602, 00:16:45.232 "message": "Invalid MN t5Mlw<47q8fac%6Gto}~ig99jL&jSKs *d=*39*tf" 00:16:45.232 }' 00:16:45.232 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:16:45.232 { 00:16:45.232 "nqn": 
"nqn.2016-06.io.spdk:cnode15680", 00:16:45.232 "model_number": "t5Mlw<47q8fac%6Gto}~ig99jL&jSKs *d=*39*tf", 00:16:45.232 "method": "nvmf_create_subsystem", 00:16:45.232 "req_id": 1 00:16:45.232 } 00:16:45.232 Got JSON-RPC error response 00:16:45.232 response: 00:16:45.232 { 00:16:45.232 "code": -32602, 00:16:45.232 "message": "Invalid MN t5Mlw<47q8fac%6Gto}~ig99jL&jSKs *d=*39*tf" 00:16:45.232 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:45.232 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:16:45.232 [2024-12-05 13:48:27.774174] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:45.232 13:48:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:45.490 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:16:45.490 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:16:45.490 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:16:45.490 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:16:45.490 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:16:45.747 [2024-12-05 13:48:28.187508] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:45.747 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:16:45.747 { 00:16:45.747 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:45.747 "listen_address": { 00:16:45.747 "trtype": "tcp", 00:16:45.747 "traddr": "", 00:16:45.747 "trsvcid": "4421" 
00:16:45.747 }, 00:16:45.747 "method": "nvmf_subsystem_remove_listener", 00:16:45.747 "req_id": 1 00:16:45.747 } 00:16:45.747 Got JSON-RPC error response 00:16:45.747 response: 00:16:45.747 { 00:16:45.747 "code": -32602, 00:16:45.747 "message": "Invalid parameters" 00:16:45.747 }' 00:16:45.747 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:16:45.747 { 00:16:45.747 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:45.747 "listen_address": { 00:16:45.747 "trtype": "tcp", 00:16:45.747 "traddr": "", 00:16:45.747 "trsvcid": "4421" 00:16:45.747 }, 00:16:45.747 "method": "nvmf_subsystem_remove_listener", 00:16:45.747 "req_id": 1 00:16:45.747 } 00:16:45.747 Got JSON-RPC error response 00:16:45.747 response: 00:16:45.747 { 00:16:45.747 "code": -32602, 00:16:45.747 "message": "Invalid parameters" 00:16:45.747 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:45.747 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3225 -i 0 00:16:46.005 [2024-12-05 13:48:28.384107] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3225: invalid cntlid range [0-65519] 00:16:46.005 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:16:46.005 { 00:16:46.005 "nqn": "nqn.2016-06.io.spdk:cnode3225", 00:16:46.005 "min_cntlid": 0, 00:16:46.005 "method": "nvmf_create_subsystem", 00:16:46.005 "req_id": 1 00:16:46.005 } 00:16:46.005 Got JSON-RPC error response 00:16:46.005 response: 00:16:46.005 { 00:16:46.005 "code": -32602, 00:16:46.005 "message": "Invalid cntlid range [0-65519]" 00:16:46.005 }' 00:16:46.005 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:16:46.005 { 00:16:46.005 "nqn": "nqn.2016-06.io.spdk:cnode3225", 00:16:46.005 "min_cntlid": 0, 00:16:46.005 "method": 
"nvmf_create_subsystem", 00:16:46.005 "req_id": 1 00:16:46.005 } 00:16:46.005 Got JSON-RPC error response 00:16:46.005 response: 00:16:46.005 { 00:16:46.005 "code": -32602, 00:16:46.005 "message": "Invalid cntlid range [0-65519]" 00:16:46.005 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:46.005 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13605 -i 65520 00:16:46.005 [2024-12-05 13:48:28.588791] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13605: invalid cntlid range [65520-65519] 00:16:46.263 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:16:46.263 { 00:16:46.263 "nqn": "nqn.2016-06.io.spdk:cnode13605", 00:16:46.263 "min_cntlid": 65520, 00:16:46.263 "method": "nvmf_create_subsystem", 00:16:46.263 "req_id": 1 00:16:46.263 } 00:16:46.263 Got JSON-RPC error response 00:16:46.263 response: 00:16:46.263 { 00:16:46.263 "code": -32602, 00:16:46.263 "message": "Invalid cntlid range [65520-65519]" 00:16:46.263 }' 00:16:46.263 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:16:46.263 { 00:16:46.263 "nqn": "nqn.2016-06.io.spdk:cnode13605", 00:16:46.263 "min_cntlid": 65520, 00:16:46.263 "method": "nvmf_create_subsystem", 00:16:46.263 "req_id": 1 00:16:46.263 } 00:16:46.263 Got JSON-RPC error response 00:16:46.263 response: 00:16:46.263 { 00:16:46.263 "code": -32602, 00:16:46.263 "message": "Invalid cntlid range [65520-65519]" 00:16:46.263 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:46.263 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8405 -I 0 00:16:46.263 [2024-12-05 13:48:28.801494] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: 
Subsystem nqn.2016-06.io.spdk:cnode8405: invalid cntlid range [1-0] 00:16:46.263 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:16:46.263 { 00:16:46.263 "nqn": "nqn.2016-06.io.spdk:cnode8405", 00:16:46.263 "max_cntlid": 0, 00:16:46.263 "method": "nvmf_create_subsystem", 00:16:46.263 "req_id": 1 00:16:46.263 } 00:16:46.263 Got JSON-RPC error response 00:16:46.263 response: 00:16:46.263 { 00:16:46.263 "code": -32602, 00:16:46.263 "message": "Invalid cntlid range [1-0]" 00:16:46.263 }' 00:16:46.263 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:16:46.263 { 00:16:46.263 "nqn": "nqn.2016-06.io.spdk:cnode8405", 00:16:46.263 "max_cntlid": 0, 00:16:46.263 "method": "nvmf_create_subsystem", 00:16:46.263 "req_id": 1 00:16:46.263 } 00:16:46.263 Got JSON-RPC error response 00:16:46.263 response: 00:16:46.263 { 00:16:46.263 "code": -32602, 00:16:46.263 "message": "Invalid cntlid range [1-0]" 00:16:46.263 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:46.263 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1602 -I 65520 00:16:46.521 [2024-12-05 13:48:29.010181] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1602: invalid cntlid range [1-65520] 00:16:46.521 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:16:46.521 { 00:16:46.521 "nqn": "nqn.2016-06.io.spdk:cnode1602", 00:16:46.521 "max_cntlid": 65520, 00:16:46.521 "method": "nvmf_create_subsystem", 00:16:46.521 "req_id": 1 00:16:46.521 } 00:16:46.521 Got JSON-RPC error response 00:16:46.521 response: 00:16:46.521 { 00:16:46.521 "code": -32602, 00:16:46.521 "message": "Invalid cntlid range [1-65520]" 00:16:46.521 }' 00:16:46.521 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # 
[[ request: 00:16:46.521 { 00:16:46.521 "nqn": "nqn.2016-06.io.spdk:cnode1602", 00:16:46.521 "max_cntlid": 65520, 00:16:46.521 "method": "nvmf_create_subsystem", 00:16:46.521 "req_id": 1 00:16:46.521 } 00:16:46.521 Got JSON-RPC error response 00:16:46.521 response: 00:16:46.521 { 00:16:46.521 "code": -32602, 00:16:46.521 "message": "Invalid cntlid range [1-65520]" 00:16:46.521 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:46.521 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10657 -i 6 -I 5 00:16:46.779 [2024-12-05 13:48:29.210865] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10657: invalid cntlid range [6-5] 00:16:46.779 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:16:46.779 { 00:16:46.779 "nqn": "nqn.2016-06.io.spdk:cnode10657", 00:16:46.779 "min_cntlid": 6, 00:16:46.779 "max_cntlid": 5, 00:16:46.779 "method": "nvmf_create_subsystem", 00:16:46.779 "req_id": 1 00:16:46.779 } 00:16:46.779 Got JSON-RPC error response 00:16:46.779 response: 00:16:46.779 { 00:16:46.779 "code": -32602, 00:16:46.779 "message": "Invalid cntlid range [6-5]" 00:16:46.779 }' 00:16:46.779 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:16:46.779 { 00:16:46.779 "nqn": "nqn.2016-06.io.spdk:cnode10657", 00:16:46.780 "min_cntlid": 6, 00:16:46.780 "max_cntlid": 5, 00:16:46.780 "method": "nvmf_create_subsystem", 00:16:46.780 "req_id": 1 00:16:46.780 } 00:16:46.780 Got JSON-RPC error response 00:16:46.780 response: 00:16:46.780 { 00:16:46.780 "code": -32602, 00:16:46.780 "message": "Invalid cntlid range [6-5]" 00:16:46.780 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:46.780 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:46.780 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:16:46.780 { 00:16:46.780 "name": "foobar", 00:16:46.780 "method": "nvmf_delete_target", 00:16:46.780 "req_id": 1 00:16:46.780 } 00:16:46.780 Got JSON-RPC error response 00:16:46.780 response: 00:16:46.780 { 00:16:46.780 "code": -32602, 00:16:46.780 "message": "The specified target doesn'\''t exist, cannot delete it." 00:16:46.780 }' 00:16:46.780 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:16:46.780 { 00:16:46.780 "name": "foobar", 00:16:46.780 "method": "nvmf_delete_target", 00:16:46.780 "req_id": 1 00:16:46.780 } 00:16:46.780 Got JSON-RPC error response 00:16:46.780 response: 00:16:46.780 { 00:16:46.780 "code": -32602, 00:16:46.780 "message": "The specified target doesn't exist, cannot delete it." 00:16:46.780 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:46.780 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:46.780 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:16:46.780 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:46.780 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:16:46.780 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:46.780 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:16:46.780 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:46.780 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:46.780 rmmod nvme_tcp 00:16:47.037 
rmmod nvme_fabrics 00:16:47.037 rmmod nvme_keyring 00:16:47.037 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:47.037 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:16:47.037 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:16:47.037 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 607826 ']' 00:16:47.037 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 607826 00:16:47.037 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 607826 ']' 00:16:47.037 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 607826 00:16:47.037 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:16:47.037 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.037 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 607826 00:16:47.037 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:47.037 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:47.037 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 607826' 00:16:47.037 killing process with pid 607826 00:16:47.037 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 607826 00:16:47.037 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 607826 00:16:47.296 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:47.296 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:47.296 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:47.296 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:16:47.296 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:16:47.296 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:47.296 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:16:47.296 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:47.296 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:47.296 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.296 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:47.296 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.202 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:49.202 00:16:49.202 real 0m12.020s 00:16:49.202 user 0m18.538s 00:16:49.202 sys 0m5.370s 00:16:49.202 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:49.202 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:49.202 ************************************ 00:16:49.202 END TEST nvmf_invalid 00:16:49.202 ************************************ 00:16:49.202 13:48:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:49.202 13:48:31 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:49.202 13:48:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:49.202 13:48:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:49.202 ************************************ 00:16:49.202 START TEST nvmf_connect_stress 00:16:49.202 ************************************ 00:16:49.202 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:49.463 * Looking for test storage... 00:16:49.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:16:49.463 13:48:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- 
# echo 2 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:49.463 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:49.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.464 --rc genhtml_branch_coverage=1 00:16:49.464 --rc genhtml_function_coverage=1 00:16:49.464 --rc genhtml_legend=1 00:16:49.464 --rc geninfo_all_blocks=1 00:16:49.464 --rc geninfo_unexecuted_blocks=1 00:16:49.464 00:16:49.464 ' 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:49.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.464 --rc genhtml_branch_coverage=1 00:16:49.464 --rc genhtml_function_coverage=1 00:16:49.464 --rc genhtml_legend=1 00:16:49.464 --rc geninfo_all_blocks=1 00:16:49.464 --rc geninfo_unexecuted_blocks=1 00:16:49.464 00:16:49.464 ' 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:49.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.464 --rc genhtml_branch_coverage=1 00:16:49.464 --rc genhtml_function_coverage=1 00:16:49.464 --rc genhtml_legend=1 00:16:49.464 --rc geninfo_all_blocks=1 00:16:49.464 --rc geninfo_unexecuted_blocks=1 00:16:49.464 00:16:49.464 ' 00:16:49.464 13:48:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:49.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.464 --rc genhtml_branch_coverage=1 00:16:49.464 --rc genhtml_function_coverage=1 00:16:49.464 --rc genhtml_legend=1 00:16:49.464 --rc geninfo_all_blocks=1 00:16:49.464 --rc geninfo_unexecuted_blocks=1 00:16:49.464 00:16:49.464 ' 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.464 13:48:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:49.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:49.464 13:48:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.464 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:49.465 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.465 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:49.465 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:49.465 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:16:49.465 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.219 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:56.219 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:16:56.219 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- 
# local -a pci_devs 00:16:56.219 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:56.219 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:56.219 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:56.219 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:56.219 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:56.220 
Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:56.220 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:56.220 Found net devices under 0000:86:00.0: cvl_0_0 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.220 13:48:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:56.220 Found net devices under 0000:86:00.1: cvl_0_1 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:56.220 
13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:16:56.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:56.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:16:56.220 00:16:56.220 --- 10.0.0.2 ping statistics --- 00:16:56.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.220 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:56.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:56.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:16:56.220 00:16:56.220 --- 10.0.0.1 ping statistics --- 00:16:56.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.220 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 
0xE 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=612122 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 612122 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 612122 ']' 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:56.220 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.220 [2024-12-05 13:48:38.013904] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:16:56.220 [2024-12-05 13:48:38.013952] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.220 [2024-12-05 13:48:38.090999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:56.220 [2024-12-05 13:48:38.132967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.220 [2024-12-05 13:48:38.133003] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.220 [2024-12-05 13:48:38.133010] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.220 [2024-12-05 13:48:38.133016] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.220 [2024-12-05 13:48:38.133021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:56.220 [2024-12-05 13:48:38.134471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.220 [2024-12-05 13:48:38.134577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.220 [2024-12-05 13:48:38.134578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:56.220 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:56.220 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:16:56.220 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:56.220 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:56.220 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.221 [2024-12-05 13:48:38.271924] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.221 [2024-12-05 13:48:38.292141] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.221 NULL1 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=612153 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.221 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.478 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.479 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:16:56.479 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:56.479 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.479 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.044 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.044 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:16:57.044 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.044 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.044 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.301 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.301 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:16:57.301 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.301 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.301 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.560 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.560 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:16:57.560 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.560 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.560 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.818 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.818 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:16:57.818 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.818 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.818 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:58.076 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.076 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:16:58.076 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.076 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.076 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:58.642 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.642 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:16:58.642 13:48:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.642 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.642 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:58.900 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.900 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:16:58.900 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.900 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.900 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:59.157 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.158 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:16:59.158 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.158 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.158 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:59.415 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.415 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:16:59.415 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.415 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.415 13:48:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:59.979 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.979 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:16:59.979 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.979 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.979 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:00.237 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.237 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:17:00.237 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.237 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.237 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:00.494 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.494 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:17:00.494 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.494 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.494 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:00.751 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.751 13:48:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:17:00.751 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.751 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.751 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:01.009 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.009 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:17:01.009 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:01.009 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.009 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:01.573 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.573 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:17:01.573 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:01.573 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.573 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:01.830 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.830 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:17:01.830 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:01.830 13:48:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.830 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:02.087 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.087 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:17:02.087 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.087 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.087 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:02.345 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.345 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:17:02.345 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.345 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.345 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:02.910 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.910 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:17:02.910 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.910 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.910 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.167 13:48:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.167 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:17:03.167 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.167 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.167 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.424 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.424 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:17:03.424 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.424 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.424 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.681 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.681 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:17:03.681 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.681 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.681 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.939 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.939 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:17:03.939 
13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.939 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.939 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:04.503 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.503 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:17:04.503 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.503 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.503 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:04.761 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.761 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:17:04.761 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.761 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.761 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:05.017 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.017 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:17:05.017 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.017 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.017 
13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:05.274 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.274 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:17:05.274 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.274 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.274 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:05.840 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.840 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:17:05.840 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.840 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.840 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:05.840 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:06.098 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.098 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 612153 00:17:06.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (612153) - No such process 00:17:06.098 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 612153 00:17:06.098 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:06.098 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:06.098 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:06.098 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:06.098 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:06.098 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:06.098 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:06.098 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:06.098 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:06.098 rmmod nvme_tcp 00:17:06.098 rmmod nvme_fabrics 00:17:06.098 rmmod nvme_keyring 00:17:06.098 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:06.098 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:06.098 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:06.098 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 612122 ']' 00:17:06.098 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 612122 00:17:06.098 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 612122 ']' 00:17:06.098 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 612122 00:17:06.098 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 
00:17:06.098 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:06.098 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 612122 00:17:06.098 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:06.098 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:06.098 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 612122' 00:17:06.099 killing process with pid 612122 00:17:06.099 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 612122 00:17:06.099 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 612122 00:17:06.358 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:06.358 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:06.358 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:06.358 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:06.358 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:17:06.358 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:17:06.358 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:06.358 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:06.358 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:17:06.358 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.358 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:06.358 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.262 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:08.262 00:17:08.262 real 0m19.074s 00:17:08.262 user 0m39.373s 00:17:08.262 sys 0m8.628s 00:17:08.262 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:08.262 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.262 ************************************ 00:17:08.262 END TEST nvmf_connect_stress 00:17:08.262 ************************************ 00:17:08.521 13:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:08.521 13:48:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:08.521 13:48:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:08.521 13:48:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:08.521 ************************************ 00:17:08.521 START TEST nvmf_fused_ordering 00:17:08.521 ************************************ 00:17:08.521 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:08.521 * Looking for test storage... 
00:17:08.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:08.521 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:08.521 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:17:08.521 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:08.521 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:08.521 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:08.521 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:08.521 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:08.521 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:08.521 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:08.521 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:08.521 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:08.521 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:08.521 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:08.521 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:08.521 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:08.521 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:08.521 13:48:51 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:08.521 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:08.521 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:08.521 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:08.522 13:48:51 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:08.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.522 --rc genhtml_branch_coverage=1 00:17:08.522 --rc genhtml_function_coverage=1 00:17:08.522 --rc genhtml_legend=1 00:17:08.522 --rc geninfo_all_blocks=1 00:17:08.522 --rc geninfo_unexecuted_blocks=1 00:17:08.522 00:17:08.522 ' 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:08.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.522 --rc genhtml_branch_coverage=1 00:17:08.522 --rc genhtml_function_coverage=1 00:17:08.522 --rc genhtml_legend=1 00:17:08.522 --rc geninfo_all_blocks=1 00:17:08.522 --rc geninfo_unexecuted_blocks=1 00:17:08.522 00:17:08.522 ' 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:08.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.522 --rc genhtml_branch_coverage=1 00:17:08.522 --rc genhtml_function_coverage=1 00:17:08.522 --rc genhtml_legend=1 00:17:08.522 --rc geninfo_all_blocks=1 00:17:08.522 --rc geninfo_unexecuted_blocks=1 00:17:08.522 00:17:08.522 ' 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:08.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.522 --rc genhtml_branch_coverage=1 00:17:08.522 --rc genhtml_function_coverage=1 00:17:08.522 --rc genhtml_legend=1 00:17:08.522 --rc geninfo_all_blocks=1 00:17:08.522 --rc geninfo_unexecuted_blocks=1 00:17:08.522 00:17:08.522 ' 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:08.522 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:08.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:08.782 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:15.352 13:48:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:15.352 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:15.352 13:48:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:15.352 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:15.352 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.353 13:48:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:15.353 Found net devices under 0000:86:00.0: cvl_0_0 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:15.353 Found net devices under 0000:86:00.1: cvl_0_1 
00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:15.353 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:15.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:15.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:17:15.353 00:17:15.353 --- 10.0.0.2 ping statistics --- 00:17:15.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.353 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:15.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:15.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:17:15.353 00:17:15.353 --- 10.0.0.1 ping statistics --- 00:17:15.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.353 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:15.353 13:48:57 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=617510 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 617510 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 617510 ']' 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:15.353 [2024-12-05 13:48:57.174241] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:17:15.353 [2024-12-05 13:48:57.174287] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.353 [2024-12-05 13:48:57.250030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.353 [2024-12-05 13:48:57.288808] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:15.353 [2024-12-05 13:48:57.288843] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:15.353 [2024-12-05 13:48:57.288851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:15.353 [2024-12-05 13:48:57.288857] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:15.353 [2024-12-05 13:48:57.288862] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:15.353 [2024-12-05 13:48:57.289437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:15.353 [2024-12-05 13:48:57.432647] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.353 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:15.354 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.354 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:15.354 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.354 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:15.354 [2024-12-05 13:48:57.452851] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:15.354 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.354 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:15.354 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.354 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:15.354 NULL1 00:17:15.354 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.354 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:15.354 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.354 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:15.354 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.354 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:15.354 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.354 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:15.354 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.354 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:15.354 [2024-12-05 13:48:57.510036] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:17:15.354 [2024-12-05 13:48:57.510069] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid617543 ] 00:17:15.354 Attached to nqn.2016-06.io.spdk:cnode1 00:17:15.354 Namespace ID: 1 size: 1GB 00:17:15.354 fused_ordering(0) 00:17:15.354 fused_ordering(1) 00:17:15.354 fused_ordering(2) 00:17:15.354 fused_ordering(3) 00:17:15.354 fused_ordering(4) 00:17:15.354 fused_ordering(5) 00:17:15.354 fused_ordering(6) 00:17:15.354 fused_ordering(7) 00:17:15.354 fused_ordering(8) 00:17:15.354 fused_ordering(9) 00:17:15.354 fused_ordering(10) 00:17:15.354 fused_ordering(11) 00:17:15.354 fused_ordering(12) 00:17:15.354 fused_ordering(13) 00:17:15.354 fused_ordering(14) 00:17:15.354 fused_ordering(15) 00:17:15.354 fused_ordering(16) 00:17:15.354 fused_ordering(17) 00:17:15.354 fused_ordering(18) 00:17:15.354 fused_ordering(19) 00:17:15.354 fused_ordering(20) 00:17:15.354 fused_ordering(21) 00:17:15.354 fused_ordering(22) 00:17:15.354 fused_ordering(23) 00:17:15.354 fused_ordering(24) 00:17:15.354 fused_ordering(25) 00:17:15.354 fused_ordering(26) 00:17:15.354 fused_ordering(27) 00:17:15.354 
fused_ordering(28) 00:17:15.354 fused_ordering(29) 00:17:15.354 fused_ordering(30) 00:17:15.354 fused_ordering(31) 00:17:15.354 fused_ordering(32) 00:17:15.354 fused_ordering(33) 00:17:15.354 fused_ordering(34) 00:17:15.354 fused_ordering(35) 00:17:15.354 fused_ordering(36) 00:17:15.354 fused_ordering(37) 00:17:15.354 fused_ordering(38) 00:17:15.354 fused_ordering(39) 00:17:15.354 fused_ordering(40) 00:17:15.354 fused_ordering(41) 00:17:15.354 fused_ordering(42) 00:17:15.354 fused_ordering(43) 00:17:15.354 fused_ordering(44) 00:17:15.354 fused_ordering(45) 00:17:15.354 fused_ordering(46) 00:17:15.354 fused_ordering(47) 00:17:15.354 fused_ordering(48) 00:17:15.354 fused_ordering(49) 00:17:15.354 fused_ordering(50) 00:17:15.354 fused_ordering(51) 00:17:15.354 fused_ordering(52) 00:17:15.354 fused_ordering(53) 00:17:15.354 fused_ordering(54) 00:17:15.354 fused_ordering(55) 00:17:15.354 fused_ordering(56) 00:17:15.354 fused_ordering(57) 00:17:15.354 fused_ordering(58) 00:17:15.354 fused_ordering(59) 00:17:15.354 fused_ordering(60) 00:17:15.354 fused_ordering(61) 00:17:15.354 fused_ordering(62) 00:17:15.354 fused_ordering(63) 00:17:15.354 fused_ordering(64) 00:17:15.354 fused_ordering(65) 00:17:15.354 fused_ordering(66) 00:17:15.354 fused_ordering(67) 00:17:15.354 fused_ordering(68) 00:17:15.354 fused_ordering(69) 00:17:15.354 fused_ordering(70) 00:17:15.354 fused_ordering(71) 00:17:15.354 fused_ordering(72) 00:17:15.354 fused_ordering(73) 00:17:15.354 fused_ordering(74) 00:17:15.354 fused_ordering(75) 00:17:15.354 fused_ordering(76) 00:17:15.354 fused_ordering(77) 00:17:15.354 fused_ordering(78) 00:17:15.354 fused_ordering(79) 00:17:15.354 fused_ordering(80) 00:17:15.354 fused_ordering(81) 00:17:15.354 fused_ordering(82) 00:17:15.354 fused_ordering(83) 00:17:15.354 fused_ordering(84) 00:17:15.354 fused_ordering(85) 00:17:15.354 fused_ordering(86) 00:17:15.354 fused_ordering(87) 00:17:15.354 fused_ordering(88) 00:17:15.354 fused_ordering(89) 00:17:15.354 
fused_ordering(90) 00:17:15.354 fused_ordering(91) 00:17:15.354 fused_ordering(92) 00:17:15.354 fused_ordering(93) 00:17:15.354 fused_ordering(94) 00:17:15.354 fused_ordering(95) 00:17:15.354 fused_ordering(96) 00:17:15.354 fused_ordering(97) 00:17:15.354 fused_ordering(98) 00:17:15.354 fused_ordering(99) 00:17:15.354 fused_ordering(100) 00:17:15.354 fused_ordering(101) 00:17:15.354 fused_ordering(102) 00:17:15.354 fused_ordering(103) 00:17:15.354 fused_ordering(104) 00:17:15.354 fused_ordering(105) 00:17:15.354 fused_ordering(106) 00:17:15.354 fused_ordering(107) 00:17:15.354 fused_ordering(108) 00:17:15.354 fused_ordering(109) 00:17:15.354 fused_ordering(110) 00:17:15.354 fused_ordering(111) 00:17:15.354 fused_ordering(112) 00:17:15.354 fused_ordering(113) 00:17:15.354 fused_ordering(114) 00:17:15.354 fused_ordering(115) 00:17:15.354 fused_ordering(116) 00:17:15.354 fused_ordering(117) 00:17:15.354 fused_ordering(118) 00:17:15.354 fused_ordering(119) 00:17:15.354 fused_ordering(120) 00:17:15.354 fused_ordering(121) 00:17:15.354 fused_ordering(122) 00:17:15.354 fused_ordering(123) 00:17:15.354 fused_ordering(124) 00:17:15.354 fused_ordering(125) 00:17:15.354 fused_ordering(126) 00:17:15.354 fused_ordering(127) 00:17:15.354 fused_ordering(128) 00:17:15.354 fused_ordering(129) 00:17:15.354 fused_ordering(130) 00:17:15.354 fused_ordering(131) 00:17:15.354 fused_ordering(132) 00:17:15.354 fused_ordering(133) 00:17:15.354 fused_ordering(134) 00:17:15.354 fused_ordering(135) 00:17:15.354 fused_ordering(136) 00:17:15.354 fused_ordering(137) 00:17:15.354 fused_ordering(138) 00:17:15.354 fused_ordering(139) 00:17:15.354 fused_ordering(140) 00:17:15.354 fused_ordering(141) 00:17:15.354 fused_ordering(142) 00:17:15.354 fused_ordering(143) 00:17:15.354 fused_ordering(144) 00:17:15.354 fused_ordering(145) 00:17:15.354 fused_ordering(146) 00:17:15.354 fused_ordering(147) 00:17:15.354 fused_ordering(148) 00:17:15.354 fused_ordering(149) 00:17:15.354 fused_ordering(150) 
00:17:15.354 fused_ordering(151) 00:17:15.354 fused_ordering(152) 00:17:15.354 fused_ordering(153) 00:17:15.354 fused_ordering(154) 00:17:15.354 fused_ordering(155) 00:17:15.354 fused_ordering(156) 00:17:15.354 fused_ordering(157) 00:17:15.354 fused_ordering(158) 00:17:15.354 fused_ordering(159) 00:17:15.354 fused_ordering(160) 00:17:15.354 fused_ordering(161) 00:17:15.354 fused_ordering(162) 00:17:15.354 fused_ordering(163) 00:17:15.354 fused_ordering(164) 00:17:15.354 fused_ordering(165) 00:17:15.354 fused_ordering(166) 00:17:15.354 fused_ordering(167) 00:17:15.354 fused_ordering(168) 00:17:15.354 fused_ordering(169) 00:17:15.354 fused_ordering(170) 00:17:15.354 fused_ordering(171) 00:17:15.354 fused_ordering(172) 00:17:15.354 fused_ordering(173) 00:17:15.354 fused_ordering(174) 00:17:15.354 fused_ordering(175) 00:17:15.354 fused_ordering(176) 00:17:15.354 fused_ordering(177) 00:17:15.354 fused_ordering(178) 00:17:15.354 fused_ordering(179) 00:17:15.354 fused_ordering(180) 00:17:15.354 fused_ordering(181) 00:17:15.354 fused_ordering(182) 00:17:15.354 fused_ordering(183) 00:17:15.354 fused_ordering(184) 00:17:15.354 fused_ordering(185) 00:17:15.354 fused_ordering(186) 00:17:15.354 fused_ordering(187) 00:17:15.354 fused_ordering(188) 00:17:15.354 fused_ordering(189) 00:17:15.354 fused_ordering(190) 00:17:15.354 fused_ordering(191) 00:17:15.354 fused_ordering(192) 00:17:15.354 fused_ordering(193) 00:17:15.354 fused_ordering(194) 00:17:15.354 fused_ordering(195) 00:17:15.354 fused_ordering(196) 00:17:15.354 fused_ordering(197) 00:17:15.354 fused_ordering(198) 00:17:15.354 fused_ordering(199) 00:17:15.354 fused_ordering(200) 00:17:15.354 fused_ordering(201) 00:17:15.354 fused_ordering(202) 00:17:15.354 fused_ordering(203) 00:17:15.354 fused_ordering(204) 00:17:15.354 fused_ordering(205) 00:17:15.613 fused_ordering(206) 00:17:15.613 fused_ordering(207) 00:17:15.613 fused_ordering(208) 00:17:15.613 fused_ordering(209) 00:17:15.613 fused_ordering(210) 00:17:15.613 
fused_ordering(211) 00:17:15.613 fused_ordering(212) 00:17:15.613 fused_ordering(213) 00:17:15.613 fused_ordering(214) 00:17:15.613 fused_ordering(215) 00:17:15.613 fused_ordering(216) 00:17:15.613 fused_ordering(217) 00:17:15.613 fused_ordering(218) 00:17:15.613 fused_ordering(219) 00:17:15.613 fused_ordering(220) 00:17:15.613 fused_ordering(221) 00:17:15.613 fused_ordering(222) 00:17:15.613 fused_ordering(223) 00:17:15.613 fused_ordering(224) 00:17:15.613 fused_ordering(225) 00:17:15.613 fused_ordering(226) 00:17:15.613 fused_ordering(227) 00:17:15.613 fused_ordering(228) 00:17:15.613 fused_ordering(229) 00:17:15.613 fused_ordering(230) 00:17:15.613 fused_ordering(231) 00:17:15.613 fused_ordering(232) 00:17:15.613 fused_ordering(233) 00:17:15.613 fused_ordering(234) 00:17:15.613 fused_ordering(235) 00:17:15.613 fused_ordering(236) 00:17:15.613 fused_ordering(237) 00:17:15.613 fused_ordering(238) 00:17:15.613 fused_ordering(239) 00:17:15.613 fused_ordering(240) 00:17:15.613 fused_ordering(241) 00:17:15.613 fused_ordering(242) 00:17:15.613 fused_ordering(243) 00:17:15.613 fused_ordering(244) 00:17:15.613 fused_ordering(245) 00:17:15.613 fused_ordering(246) 00:17:15.613 fused_ordering(247) 00:17:15.613 fused_ordering(248) 00:17:15.613 fused_ordering(249) 00:17:15.613 fused_ordering(250) 00:17:15.613 fused_ordering(251) 00:17:15.613 fused_ordering(252) 00:17:15.613 fused_ordering(253) 00:17:15.613 fused_ordering(254) 00:17:15.613 fused_ordering(255) 00:17:15.613 fused_ordering(256) 00:17:15.613 fused_ordering(257) 00:17:15.613 fused_ordering(258) 00:17:15.613 fused_ordering(259) 00:17:15.613 fused_ordering(260) 00:17:15.613 fused_ordering(261) 00:17:15.613 fused_ordering(262) 00:17:15.613 fused_ordering(263) 00:17:15.613 fused_ordering(264) 00:17:15.613 fused_ordering(265) 00:17:15.613 fused_ordering(266) 00:17:15.614 fused_ordering(267) 00:17:15.614 fused_ordering(268) 00:17:15.614 fused_ordering(269) 00:17:15.614 fused_ordering(270) 00:17:15.614 fused_ordering(271) 
00:17:15.614 fused_ordering(272) 00:17:15.614 fused_ordering(273) 00:17:15.614 fused_ordering(274) 00:17:15.614 fused_ordering(275) 00:17:15.614 fused_ordering(276) 00:17:15.614 fused_ordering(277) 00:17:15.614 fused_ordering(278) 00:17:15.614 fused_ordering(279) 00:17:15.614 fused_ordering(280) 00:17:15.614 fused_ordering(281) 00:17:15.614 fused_ordering(282) 00:17:15.614 fused_ordering(283) 00:17:15.614 fused_ordering(284) 00:17:15.614 fused_ordering(285) 00:17:15.614 fused_ordering(286) 00:17:15.614 fused_ordering(287) 00:17:15.614 fused_ordering(288) 00:17:15.614 fused_ordering(289) 00:17:15.614 fused_ordering(290) 00:17:15.614 fused_ordering(291) 00:17:15.614 fused_ordering(292) 00:17:15.614 fused_ordering(293) 00:17:15.614 fused_ordering(294) 00:17:15.614 fused_ordering(295) 00:17:15.614 fused_ordering(296) 00:17:15.614 fused_ordering(297) 00:17:15.614 fused_ordering(298) 00:17:15.614 fused_ordering(299) 00:17:15.614 fused_ordering(300) 00:17:15.614 fused_ordering(301) 00:17:15.614 fused_ordering(302) 00:17:15.614 fused_ordering(303) 00:17:15.614 fused_ordering(304) 00:17:15.614 fused_ordering(305) 00:17:15.614 fused_ordering(306) 00:17:15.614 fused_ordering(307) 00:17:15.614 fused_ordering(308) 00:17:15.614 fused_ordering(309) 00:17:15.614 fused_ordering(310) 00:17:15.614 fused_ordering(311) 00:17:15.614 fused_ordering(312) 00:17:15.614 fused_ordering(313) 00:17:15.614 fused_ordering(314) 00:17:15.614 fused_ordering(315) 00:17:15.614 fused_ordering(316) 00:17:15.614 fused_ordering(317) 00:17:15.614 fused_ordering(318) 00:17:15.614 fused_ordering(319) 00:17:15.614 fused_ordering(320) 00:17:15.614 fused_ordering(321) 00:17:15.614 fused_ordering(322) 00:17:15.614 fused_ordering(323) 00:17:15.614 fused_ordering(324) 00:17:15.614 fused_ordering(325) 00:17:15.614 fused_ordering(326) 00:17:15.614 fused_ordering(327) 00:17:15.614 fused_ordering(328) 00:17:15.614 fused_ordering(329) 00:17:15.614 fused_ordering(330) 00:17:15.614 fused_ordering(331) 00:17:15.614 
00:17:15.614 fused_ordering(332) [... fused_ordering iterations 333 through 997 elided for brevity; all logged between 00:17:15.614 and 00:17:16.702 ...]
00:17:16.702 fused_ordering(998) 00:17:16.702 fused_ordering(999) 00:17:16.703 fused_ordering(1000) 00:17:16.703 fused_ordering(1001) 00:17:16.703 fused_ordering(1002) 00:17:16.703 fused_ordering(1003) 00:17:16.703 fused_ordering(1004) 00:17:16.703 fused_ordering(1005) 00:17:16.703 fused_ordering(1006) 00:17:16.703 fused_ordering(1007) 00:17:16.703 fused_ordering(1008) 00:17:16.703 fused_ordering(1009) 00:17:16.703 fused_ordering(1010) 00:17:16.703 fused_ordering(1011) 00:17:16.703 fused_ordering(1012) 00:17:16.703 fused_ordering(1013) 00:17:16.703 fused_ordering(1014) 00:17:16.703 fused_ordering(1015) 00:17:16.703 fused_ordering(1016) 00:17:16.703 fused_ordering(1017) 00:17:16.703 fused_ordering(1018) 00:17:16.703 fused_ordering(1019) 00:17:16.703 fused_ordering(1020) 00:17:16.703 fused_ordering(1021) 00:17:16.703 fused_ordering(1022) 00:17:16.703 fused_ordering(1023) 00:17:16.703 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:16.703 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:16.703 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:16.703 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:16.703 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:16.703 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:16.703 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:16.703 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:16.703 rmmod nvme_tcp 00:17:16.703 rmmod nvme_fabrics 00:17:16.703 rmmod nvme_keyring 00:17:16.962 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:17:16.962 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:16.962 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:16.962 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 617510 ']' 00:17:16.962 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 617510 00:17:16.962 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 617510 ']' 00:17:16.962 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 617510 00:17:16.962 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:17:16.962 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:16.962 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 617510 00:17:16.962 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:16.962 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:16.962 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 617510' 00:17:16.962 killing process with pid 617510 00:17:16.962 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 617510 00:17:16.962 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 617510 00:17:16.962 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:16.962 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
00:17:16.962 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:16.962 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:16.962 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:17:16.962 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:16.962 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:17:16.962 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:16.962 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:16.962 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.962 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:16.962 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:19.495 00:17:19.495 real 0m10.682s 00:17:19.495 user 0m4.917s 00:17:19.495 sys 0m5.833s 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:19.495 ************************************ 00:17:19.495 END TEST nvmf_fused_ordering 00:17:19.495 ************************************ 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:19.495 13:49:01 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:19.495 ************************************ 00:17:19.495 START TEST nvmf_ns_masking 00:17:19.495 ************************************ 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:19.495 * Looking for test storage... 00:17:19.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:19.495 13:49:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:19.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.495 --rc genhtml_branch_coverage=1 00:17:19.495 --rc genhtml_function_coverage=1 00:17:19.495 --rc genhtml_legend=1 00:17:19.495 --rc geninfo_all_blocks=1 00:17:19.495 --rc geninfo_unexecuted_blocks=1 00:17:19.495 00:17:19.495 ' 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:19.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.495 --rc genhtml_branch_coverage=1 00:17:19.495 --rc genhtml_function_coverage=1 00:17:19.495 --rc genhtml_legend=1 00:17:19.495 --rc geninfo_all_blocks=1 00:17:19.495 --rc geninfo_unexecuted_blocks=1 00:17:19.495 00:17:19.495 ' 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:19.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.495 --rc genhtml_branch_coverage=1 00:17:19.495 --rc genhtml_function_coverage=1 00:17:19.495 --rc genhtml_legend=1 00:17:19.495 --rc geninfo_all_blocks=1 00:17:19.495 --rc geninfo_unexecuted_blocks=1 00:17:19.495 00:17:19.495 ' 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:19.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.495 --rc genhtml_branch_coverage=1 00:17:19.495 --rc 
genhtml_function_coverage=1 00:17:19.495 --rc genhtml_legend=1 00:17:19.495 --rc geninfo_all_blocks=1 00:17:19.495 --rc geninfo_unexecuted_blocks=1 00:17:19.495 00:17:19.495 ' 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:19.495 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:19.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=777630a7-3886-4c45-98ea-5be3d0022b2c 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=a3de2492-c7bc-469d-bfa4-96f59e1fc7b9 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=424e206c-8ec1-40bc-b7af-aa01baca434a 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:19.496 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:26.064 13:49:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:26.064 13:49:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:26.064 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:26.064 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:26.065 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:17:26.065 Found net devices under 0000:86:00.0: cvl_0_0 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:26.065 Found net devices under 0000:86:00.1: cvl_0_1 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:26.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:26.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:17:26.065 00:17:26.065 --- 10.0.0.2 ping statistics --- 00:17:26.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.065 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:26.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:26.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:17:26.065 00:17:26.065 --- 10.0.0.1 ping statistics --- 00:17:26.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.065 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=621311 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 621311 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 621311 ']' 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:26.065 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.066 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:26.066 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:26.066 [2024-12-05 13:49:07.938347] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:17:26.066 [2024-12-05 13:49:07.938396] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.066 [2024-12-05 13:49:08.019111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.066 [2024-12-05 13:49:08.060633] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:26.066 [2024-12-05 13:49:08.060666] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:26.066 [2024-12-05 13:49:08.060674] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:26.066 [2024-12-05 13:49:08.060680] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:26.066 [2024-12-05 13:49:08.060685] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:26.066 [2024-12-05 13:49:08.061211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.325 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:26.325 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:26.325 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:26.325 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:26.325 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:26.325 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:26.325 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:26.583 [2024-12-05 13:49:08.966924] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:26.583 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:26.583 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:26.584 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:17:26.842 Malloc1 00:17:26.842 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:26.842 Malloc2 00:17:26.842 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:27.100 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:27.359 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:27.618 [2024-12-05 13:49:09.978034] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:27.618 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:27.618 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 424e206c-8ec1-40bc-b7af-aa01baca434a -a 10.0.0.2 -s 4420 -i 4 00:17:27.618 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:27.618 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:27.618 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:27.618 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:27.618 13:49:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:30.153 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:30.153 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:30.153 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:30.153 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:30.153 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:30.153 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:30.153 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:30.153 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:30.153 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:30.153 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:30.153 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:30.153 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:30.153 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:30.153 [ 0]:0x1 00:17:30.153 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:30.153 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:30.153 
13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ef6498a9e69c453e9c419540bb520de5 00:17:30.153 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ef6498a9e69c453e9c419540bb520de5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:30.153 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:30.153 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:30.153 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:30.153 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:30.153 [ 0]:0x1 00:17:30.153 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:30.153 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:30.153 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ef6498a9e69c453e9c419540bb520de5 00:17:30.153 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ef6498a9e69c453e9c419540bb520de5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:30.153 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:30.154 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:30.154 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:30.154 [ 1]:0x2 00:17:30.154 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:17:30.154 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:30.154 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ebd9dee8aea64d5d982cf9b6dc2a093e 00:17:30.154 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ebd9dee8aea64d5d982cf9b6dc2a093e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:30.154 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:30.154 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:30.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:30.154 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:30.413 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:30.683 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:30.683 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 424e206c-8ec1-40bc-b7af-aa01baca434a -a 10.0.0.2 -s 4420 -i 4 00:17:30.683 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:30.941 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:30.941 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:30.941 13:49:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:17:30.941 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:17:30.941 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:32.842 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:32.842 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:32.842 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:32.842 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:32.842 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:32.842 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:32.842 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:32.842 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:32.842 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:32.842 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:32.842 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:32.842 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:32.842 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:17:32.842 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:32.842 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.842 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:32.842 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.842 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:32.842 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:32.842 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:32.842 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:32.842 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:33.098 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:33.098 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:33.098 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:33.098 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:33.098 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:33.098 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:33.098 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:17:33.098 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:33.098 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:33.098 [ 0]:0x2 00:17:33.098 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:33.098 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:33.098 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ebd9dee8aea64d5d982cf9b6dc2a093e 00:17:33.098 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ebd9dee8aea64d5d982cf9b6dc2a093e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:33.098 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:33.355 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:33.355 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:33.355 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:33.355 [ 0]:0x1 00:17:33.356 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:33.356 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:33.356 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ef6498a9e69c453e9c419540bb520de5 00:17:33.356 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ef6498a9e69c453e9c419540bb520de5 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:33.356 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:33.356 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:33.356 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:33.356 [ 1]:0x2 00:17:33.356 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:33.356 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:33.356 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ebd9dee8aea64d5d982cf9b6dc2a093e 00:17:33.356 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ebd9dee8aea64d5d982cf9b6dc2a093e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:33.356 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:33.614 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:33.614 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:33.614 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:33.614 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:33.614 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.614 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:17:33.614 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.614 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:33.614 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:33.614 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:33.614 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:33.614 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:33.614 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:33.614 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:33.614 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:33.614 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:33.614 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:33.614 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:33.614 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:33.614 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:33.614 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:33.614 [ 0]:0x2 00:17:33.614 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:33.614 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:33.614 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ebd9dee8aea64d5d982cf9b6dc2a093e 00:17:33.614 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ebd9dee8aea64d5d982cf9b6dc2a093e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:33.614 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:33.614 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:33.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:33.614 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:33.871 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:33.871 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 424e206c-8ec1-40bc-b7af-aa01baca434a -a 10.0.0.2 -s 4420 -i 4 00:17:34.128 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:34.128 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:34.128 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:34.128 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:34.128 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:34.128 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:36.032 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:36.032 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:36.032 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:36.032 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:36.032 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:36.032 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:36.032 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:36.032 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:36.032 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:36.032 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:36.032 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:36.032 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:36.032 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:36.032 [ 0]:0x1 00:17:36.032 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:36.032 13:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:36.032 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ef6498a9e69c453e9c419540bb520de5 00:17:36.032 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ef6498a9e69c453e9c419540bb520de5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:36.032 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:36.032 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:36.032 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:36.289 [ 1]:0x2 00:17:36.289 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:36.289 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:36.289 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ebd9dee8aea64d5d982cf9b6dc2a093e 00:17:36.289 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ebd9dee8aea64d5d982cf9b6dc2a093e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:36.289 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:36.289 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:36.289 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:36.289 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:36.289 
13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:36.289 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:36.289 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:36.289 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:36.289 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:36.289 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:36.289 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:36.547 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:36.547 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:36.547 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:36.547 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:36.547 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:36.547 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:36.547 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:36.547 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:36.547 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:17:36.547 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:36.547 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:36.547 [ 0]:0x2 00:17:36.547 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:36.547 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:36.547 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ebd9dee8aea64d5d982cf9b6dc2a093e 00:17:36.547 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ebd9dee8aea64d5d982cf9b6dc2a093e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:36.547 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:36.547 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:36.547 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:36.547 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.547 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:36.547 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.547 13:49:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:36.547 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.547 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:36.547 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.547 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:36.547 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:36.805 [2024-12-05 13:49:19.184001] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:36.805 request: 00:17:36.805 { 00:17:36.805 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.805 "nsid": 2, 00:17:36.805 "host": "nqn.2016-06.io.spdk:host1", 00:17:36.805 "method": "nvmf_ns_remove_host", 00:17:36.805 "req_id": 1 00:17:36.805 } 00:17:36.805 Got JSON-RPC error response 00:17:36.805 response: 00:17:36.805 { 00:17:36.805 "code": -32602, 00:17:36.805 "message": "Invalid parameters" 00:17:36.805 } 00:17:36.805 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:36.805 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:36.805 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:36.805 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:36.805 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:36.805 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:36.805 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:36.805 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:36.805 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:36.805 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:36.805 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:36.805 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:36.805 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:36.805 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:36.805 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:36.805 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:36.805 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:36.805 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:36.805 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:36.805 13:49:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:36.805 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:36.805 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:36.805 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:36.805 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:36.805 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:36.805 [ 0]:0x2 00:17:36.805 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:36.805 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:36.805 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ebd9dee8aea64d5d982cf9b6dc2a093e 00:17:36.805 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ebd9dee8aea64d5d982cf9b6dc2a093e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:36.805 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:36.805 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:37.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:37.063 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=623373 00:17:37.063 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.063 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 623373 
/var/tmp/host.sock 00:17:37.063 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:37.063 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 623373 ']' 00:17:37.063 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:37.063 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:37.063 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:37.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:37.063 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:37.063 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:37.063 [2024-12-05 13:49:19.542102] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:17:37.063 [2024-12-05 13:49:19.542157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid623373 ] 00:17:37.063 [2024-12-05 13:49:19.620057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.321 [2024-12-05 13:49:19.663095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.887 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:37.887 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:37.887 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:38.145 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:38.403 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 777630a7-3886-4c45-98ea-5be3d0022b2c 00:17:38.403 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:38.403 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 777630A738864C4598EA5BE3D0022B2C -i 00:17:38.403 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid a3de2492-c7bc-469d-bfa4-96f59e1fc7b9 00:17:38.403 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:38.403 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g A3DE2492C7BC469DBFA496F59E1FC7B9 -i 00:17:38.661 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:38.919 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:39.178 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:39.178 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:39.437 nvme0n1 00:17:39.437 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:39.437 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:39.696 nvme1n2 00:17:39.954 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:39.954 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:39.954 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:39.954 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:39.954 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:39.954 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:39.954 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:39.954 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:39.954 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:40.212 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 777630a7-3886-4c45-98ea-5be3d0022b2c == \7\7\7\6\3\0\a\7\-\3\8\8\6\-\4\c\4\5\-\9\8\e\a\-\5\b\e\3\d\0\0\2\2\b\2\c ]] 00:17:40.212 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:40.212 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:40.212 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:40.470 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ a3de2492-c7bc-469d-bfa4-96f59e1fc7b9 == \a\3\d\e\2\4\9\2\-\c\7\b\c\-\4\6\9\d\-\b\f\a\4\-\9\6\f\5\9\e\1\f\c\7\b\9 ]] 00:17:40.470 13:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:40.728 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:40.728 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 777630a7-3886-4c45-98ea-5be3d0022b2c 00:17:40.728 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:40.728 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 777630A738864C4598EA5BE3D0022B2C 00:17:40.728 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:40.728 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 777630A738864C4598EA5BE3D0022B2C 00:17:40.728 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:40.728 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.728 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:40.728 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.728 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:40.728 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.728 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:40.728 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:40.728 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 777630A738864C4598EA5BE3D0022B2C 00:17:40.986 [2024-12-05 13:49:23.447713] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:17:40.986 [2024-12-05 13:49:23.447743] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:17:40.986 [2024-12-05 13:49:23.447752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.986 request: 00:17:40.986 { 00:17:40.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.986 "namespace": { 00:17:40.986 "bdev_name": "invalid", 00:17:40.986 "nsid": 1, 00:17:40.986 "nguid": "777630A738864C4598EA5BE3D0022B2C", 00:17:40.986 "no_auto_visible": false, 00:17:40.986 "hide_metadata": false 00:17:40.986 }, 00:17:40.986 "method": "nvmf_subsystem_add_ns", 00:17:40.986 "req_id": 1 00:17:40.986 } 00:17:40.986 Got JSON-RPC error response 00:17:40.986 response: 00:17:40.986 { 00:17:40.986 "code": -32602, 00:17:40.986 "message": "Invalid parameters" 00:17:40.986 } 00:17:40.986 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:40.986 13:49:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:40.986 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:40.986 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:40.986 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 777630a7-3886-4c45-98ea-5be3d0022b2c 00:17:40.986 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:40.986 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 777630A738864C4598EA5BE3D0022B2C -i 00:17:41.245 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:17:43.149 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:17:43.149 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:17:43.149 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:43.449 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:17:43.450 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 623373 00:17:43.450 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 623373 ']' 00:17:43.450 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 623373 00:17:43.450 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:43.450 13:49:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:43.450 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 623373 00:17:43.450 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:43.450 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:43.450 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 623373' 00:17:43.450 killing process with pid 623373 00:17:43.450 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 623373 00:17:43.450 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 623373 00:17:43.820 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:44.079 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:44.079 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:17:44.079 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:44.079 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:17:44.079 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:44.079 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:44.079 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:44.079 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:17:44.079 rmmod nvme_tcp 00:17:44.079 rmmod nvme_fabrics 00:17:44.079 rmmod nvme_keyring 00:17:44.079 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:44.079 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:44.079 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:44.079 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 621311 ']' 00:17:44.079 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 621311 00:17:44.079 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 621311 ']' 00:17:44.079 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 621311 00:17:44.079 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:44.079 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:44.079 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 621311 00:17:44.079 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:44.079 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:44.079 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 621311' 00:17:44.079 killing process with pid 621311 00:17:44.079 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 621311 00:17:44.079 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 621311 00:17:44.338 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 
-- # '[' '' == iso ']' 00:17:44.338 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:44.338 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:44.338 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:44.338 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:17:44.338 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:44.338 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:17:44.338 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:44.338 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:44.338 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.338 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:44.338 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.872 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:46.872 00:17:46.872 real 0m27.170s 00:17:46.872 user 0m33.044s 00:17:46.872 sys 0m7.124s 00:17:46.872 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:46.872 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:46.872 ************************************ 00:17:46.872 END TEST nvmf_ns_masking 00:17:46.872 ************************************ 00:17:46.872 13:49:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:46.872 
13:49:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:46.872 13:49:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:46.872 13:49:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:46.872 13:49:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:46.872 ************************************ 00:17:46.872 START TEST nvmf_nvme_cli 00:17:46.872 ************************************ 00:17:46.872 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:46.872 * Looking for test storage... 00:17:46.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:17:46.872 
13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:46.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.872 --rc genhtml_branch_coverage=1 00:17:46.872 --rc genhtml_function_coverage=1 00:17:46.872 --rc genhtml_legend=1 00:17:46.872 --rc geninfo_all_blocks=1 00:17:46.872 --rc geninfo_unexecuted_blocks=1 00:17:46.872 
00:17:46.872 ' 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:46.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.872 --rc genhtml_branch_coverage=1 00:17:46.872 --rc genhtml_function_coverage=1 00:17:46.872 --rc genhtml_legend=1 00:17:46.872 --rc geninfo_all_blocks=1 00:17:46.872 --rc geninfo_unexecuted_blocks=1 00:17:46.872 00:17:46.872 ' 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:46.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.872 --rc genhtml_branch_coverage=1 00:17:46.872 --rc genhtml_function_coverage=1 00:17:46.872 --rc genhtml_legend=1 00:17:46.872 --rc geninfo_all_blocks=1 00:17:46.872 --rc geninfo_unexecuted_blocks=1 00:17:46.872 00:17:46.872 ' 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:46.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.872 --rc genhtml_branch_coverage=1 00:17:46.872 --rc genhtml_function_coverage=1 00:17:46.872 --rc genhtml_legend=1 00:17:46.872 --rc geninfo_all_blocks=1 00:17:46.872 --rc geninfo_unexecuted_blocks=1 00:17:46.872 00:17:46.872 ' 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.872 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.873 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.873 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:46.873 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.873 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:17:46.873 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:46.873 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:46.873 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:46.873 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:46.873 13:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:46.873 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:46.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:46.873 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:46.873 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:46.873 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:46.873 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:46.873 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:46.873 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:46.873 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:46.873 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:46.873 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:46.873 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:46.873 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:46.873 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:46.873 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.873 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:46.873 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:17:46.873 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:46.873 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:46.873 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:17:46.873 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.439 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:53.439 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:17:53.439 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:53.439 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:53.439 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:53.439 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:53.439 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:53.439 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:17:53.439 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:53.439 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:17:53.439 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:17:53.439 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:17:53.440 13:49:34 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:53.440 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:53.440 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:53.440 13:49:34 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:53.440 Found net devices under 0000:86:00.0: cvl_0_0 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
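The device scan above buckets PCI device IDs into the e810/x722/mlx arrays via the `pci_bus_cache` associative array in nvmf/common.sh. A simplified, self-contained sketch of that bucketing (the helper name `classify_pci` and the `case`-based form are ours; the real script keys an associative array by `vendor:device`):

```shell
# Simplified sketch of the device-ID bucketing seen in nvmf/common.sh@325-344:
# Intel E810 and X722 NICs plus a list of Mellanox parts are sorted into
# the e810/x722/mlx device groups. classify_pci is a hypothetical helper,
# not the script's actual mechanism.
classify_pci() {
    case $1 in
        0x1592|0x159b) echo e810 ;;     # Intel E810
        0x37d2)        echo x722 ;;     # Intel X722
        0x1013|0x1015|0x1017|0x1019|0x101b|0x101d|0x1021|0xa2d6|0xa2dc)
                       echo mlx ;;      # Mellanox ConnectX family
        *)             echo unknown ;;
    esac
}

classify_pci 0x159b   # the device ID found at 0000:86:00.0 in this run -> e810
```

This run matched two 0x159b (E810) ports, which is why `pci_devs` ends up holding `0000:86:00.0` and `0000:86:00.1`.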
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:53.440 Found net devices under 0000:86:00.1: cvl_0_1 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:53.440 13:49:34 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:53.440 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:53.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:53.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:17:53.440 00:17:53.440 --- 10.0.0.2 ping statistics --- 00:17:53.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.440 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:53.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:53.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:17:53.440 00:17:53.440 --- 10.0.0.1 ping statistics --- 00:17:53.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.440 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:53.440 13:49:35 
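The `nvmf_tcp_init` phase logged above moves the target-side NIC into a private network namespace and verifies connectivity in both directions. A dry-run summary of that sequence (the `run` wrapper is ours and only echoes each command, since the real calls need root and the physical cvl_0_* interfaces):

```shell
# Dry-run of the nvmf_tcp_init flow from nvmf/common.sh@250-291: flush both
# interfaces, move the target NIC into a namespace, assign 10.0.0.1 (initiator)
# and 10.0.0.2 (target), bring links up, then ping each side. run() only
# prints; nothing here touches real interfaces.
run() { echo "+ $*"; }

TGT=cvl_0_0 INI=cvl_0_1 NS=cvl_0_0_ns_spdk
run ip -4 addr flush "$TGT"
run ip -4 addr flush "$INI"
run ip netns add "$NS"
run ip link set "$TGT" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
run ip link set "$INI" up
run ip netns exec "$NS" ip link set "$TGT" up
run ip netns exec "$NS" ip link set lo up
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

The iptables insert logged alongside it opens TCP port 4420 on the initiator interface, tagged with an `SPDK_NVMF` comment so teardown can find and remove it later.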
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=628165 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 628165 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 628165 ']' 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.440 [2024-12-05 13:49:35.163000] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:17:53.440 [2024-12-05 13:49:35.163046] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.440 [2024-12-05 13:49:35.242282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:53.440 [2024-12-05 13:49:35.285787] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:53.440 [2024-12-05 13:49:35.285824] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:53.440 [2024-12-05 13:49:35.285830] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:53.440 [2024-12-05 13:49:35.285836] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:53.440 [2024-12-05 13:49:35.285842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:53.440 [2024-12-05 13:49:35.287285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.440 [2024-12-05 13:49:35.287422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.440 [2024-12-05 13:49:35.287461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.440 [2024-12-05 13:49:35.287461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.440 [2024-12-05 13:49:35.426075] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.440 Malloc0 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.440 Malloc1 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.440 [2024-12-05 13:49:35.519105] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.440 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.441 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.441 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:17:53.441 00:17:53.441 Discovery Log Number of Records 2, Generation counter 2 00:17:53.441 =====Discovery Log Entry 0====== 00:17:53.441 trtype: tcp 00:17:53.441 adrfam: ipv4 00:17:53.441 subtype: current discovery subsystem 00:17:53.441 treq: not required 00:17:53.441 portid: 0 00:17:53.441 trsvcid: 4420 
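The target configuration issued through `rpc_cmd` in nvme_cli.sh@19-28 corresponds to the following RPC sequence. Sketched as echoes here: the `rpc` wrapper is ours, and in a real run these would go through SPDK's `scripts/rpc.py` against the nvmf_tgt socket.

```shell
# The RPC calls made by nvme_cli.sh, in order: create the TCP transport,
# two 64 MiB / 512 B malloc bdevs, a subsystem carrying both as namespaces,
# and listeners for the subsystem and the discovery service on 10.0.0.2:4420.
# rpc() only echoes; substitute scripts/rpc.py to drive a live target.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0
rpc bdev_malloc_create 64 512 -b Malloc1
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

The two listeners explain the two discovery-log entries that follow: entry 0 is the discovery subsystem itself, entry 1 is cnode1.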
00:17:53.441 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:53.441 traddr: 10.0.0.2 00:17:53.441 eflags: explicit discovery connections, duplicate discovery information 00:17:53.441 sectype: none 00:17:53.441 =====Discovery Log Entry 1====== 00:17:53.441 trtype: tcp 00:17:53.441 adrfam: ipv4 00:17:53.441 subtype: nvme subsystem 00:17:53.441 treq: not required 00:17:53.441 portid: 0 00:17:53.441 trsvcid: 4420 00:17:53.441 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:53.441 traddr: 10.0.0.2 00:17:53.441 eflags: none 00:17:53.441 sectype: none 00:17:53.441 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:53.441 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:53.441 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:53.441 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:53.441 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:53.441 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:53.441 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:53.441 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:53.441 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:53.441 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:53.441 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:54.388 13:49:36 
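The `get_nvme_devs` loop traced above (nvmf/common.sh@549-554) parses `nvme list` output and keeps only lines whose first field is a `/dev/nvme*` node, which is why the `Node` header and dashed separator lines are rejected. A self-contained reconstruction, fed a canned sample table (the table contents are invented) so it runs without nvme-cli:

```shell
# Reconstruction of get_nvme_devs: read each line of `nvme list` output,
# take the first whitespace-separated field, and emit it only if it looks
# like an NVMe device node. Header and separator rows fall through.
get_nvme_devs() {
    local dev _
    while read -r dev _; do
        [[ $dev == /dev/nvme* ]] && echo "$dev"
    done
}

sample='Node                 SN                    Model
--------------------- --------------------- ----------------
/dev/nvme0n1          SPDKISFASTANDAWESOME  SPDK_Controller1
/dev/nvme0n2          SPDKISFASTANDAWESOME  SPDK_Controller1'

get_nvme_devs <<<"$sample"
```

In the log this yields an empty list before `nvme connect` (`nvme_num_before_connection=0`) and two devices afterwards, one block device per namespace of cnode1.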
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:54.388 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:17:54.388 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:54.388 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:54.388 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:54.388 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:17:56.288 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:56.288 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:56.288 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:56.288 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:56.288 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:56.288 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:17:56.288 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:56.288 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:56.288 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:56.288 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:56.545 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:56.545 
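The `waitforserial` poll traced here (autotest_common.sh@1202-1212) retries up to 16 times, counting `lsblk -l -o NAME,SERIAL` lines that carry the expected serial until the count matches the expected device number. A sketch with lsblk stubbed out (the stub and its output are ours) so the loop can be exercised without hardware:

```shell
# waitforserial sketch: poll until the number of block devices reporting
# the given serial equals the expected count, or give up after 16 tries.
# lsblk_stub replaces the real `lsblk -l -o NAME,SERIAL` call here, so the
# first iteration already sees both namespaces and the loop returns at once.
lsblk_stub() {
    printf 'nvme0n1 SPDKISFASTANDAWESOME\nnvme0n2 SPDKISFASTANDAWESOME\n'
}

waitforserial() {
    local serial=$1 want=${2:-1} i=0 got=0
    while (( i++ <= 15 )); do
        got=$(lsblk_stub | grep -c "$serial")
        (( got == want )) && return 0
        sleep 2
    done
    return 1
}

waitforserial SPDKISFASTANDAWESOME 2 && echo connected
```

In this run the count is 2 because cnode1 exposes both Malloc0 and Malloc1 as namespaces under one controller serial.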
13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:56.545 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:56.545 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:56.545 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:56.545 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:56.545 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:56.545 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:56.545 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:56.545 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:56.545 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:56.545 /dev/nvme0n2 ]] 00:17:56.545 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:56.545 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:56.545 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:56.545 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:56.545 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:56.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:56.803 rmmod nvme_tcp 00:17:56.803 rmmod nvme_fabrics 00:17:56.803 rmmod nvme_keyring 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 628165 ']' 
00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 628165 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 628165 ']' 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 628165 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:56.803 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 628165 00:17:57.062 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:57.062 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:57.062 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 628165' 00:17:57.062 killing process with pid 628165 00:17:57.062 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 628165 00:17:57.062 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 628165 00:17:57.062 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:57.062 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:57.062 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:57.062 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:17:57.062 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:17:57.062 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
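The `iptr` teardown helper invoked at the end (nvmf/common.sh@297/@791) removes every firewall rule tagged with the `SPDK_NVMF` comment by round-tripping the ruleset through `iptables-save | grep -v SPDK_NVMF | iptables-restore`. The filtering step can be exercised on a canned ruleset (the sample rules are invented; no real iptables is touched):

```shell
# Simulation of the grep stage of iptr: only lines carrying the SPDK_NVMF
# comment are dropped, so unrelated rules survive the restore.
rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
-A INPUT -j DROP'

grep -v SPDK_NVMF <<<"$rules"
```

Tagging rules at insert time and filtering by tag at teardown means the test never has to remember which rules it added, only the marker it used.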
00:17:57.062 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore
00:17:57.062 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:17:57.062 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns
00:17:57.062 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:57.062 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:57.062 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:59.593 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:17:59.593
00:17:59.593 real 0m12.785s
00:17:59.593 user 0m19.018s
00:17:59.593 sys 0m5.110s
00:17:59.593 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:59.593 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:17:59.593 ************************************
00:17:59.593 END TEST nvmf_nvme_cli
00:17:59.593 ************************************
00:17:59.593 13:49:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]]
00:17:59.593 13:49:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:17:59.593 13:49:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:17:59.593 13:49:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:59.593 13:49:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:17:59.593 ************************************
00:17:59.593 START TEST nvmf_vfio_user
00:17:59.593 ************************************
00:17:59.593 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:17:59.593 * Looking for test storage...
00:17:59.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:17:59.593 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:59.593 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version
00:17:59.593 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:59.593 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:59.593 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:59.593 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:59.593 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:59.593 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-:
00:17:59.593 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1
00:17:59.593 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-:
00:17:59.593 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<'
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:17:59.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:59.594 --rc genhtml_branch_coverage=1
00:17:59.594 --rc genhtml_function_coverage=1
00:17:59.594 --rc genhtml_legend=1
00:17:59.594 --rc geninfo_all_blocks=1
00:17:59.594 --rc geninfo_unexecuted_blocks=1
00:17:59.594
00:17:59.594 '
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:17:59.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:59.594 --rc genhtml_branch_coverage=1
00:17:59.594 --rc genhtml_function_coverage=1
00:17:59.594 --rc genhtml_legend=1
00:17:59.594 --rc geninfo_all_blocks=1
00:17:59.594 --rc geninfo_unexecuted_blocks=1
00:17:59.594
00:17:59.594 '
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:17:59.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:59.594 --rc genhtml_branch_coverage=1
00:17:59.594 --rc genhtml_function_coverage=1
00:17:59.594 --rc genhtml_legend=1
00:17:59.594 --rc geninfo_all_blocks=1
00:17:59.594 --rc geninfo_unexecuted_blocks=1
00:17:59.594
00:17:59.594 '
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:17:59.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:59.594 --rc genhtml_branch_coverage=1
00:17:59.594 --rc genhtml_function_coverage=1
00:17:59.594 --rc genhtml_legend=1
00:17:59.594 --rc geninfo_all_blocks=1
00:17:59.594 --rc geninfo_unexecuted_blocks=1
00:17:59.594
00:17:59.594 '
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH
00:17:59.594 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' ''
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args=
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=629368
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 629368'
Process pid: 629368
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 629368
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]'
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 629368 ']'
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:59.595 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x
[2024-12-05 13:49:42.044074] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization...
[2024-12-05 13:49:42.044121] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-12-05 13:49:42.120516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[2024-12-05 13:49:42.162508] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-12-05 13:49:42.162546] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-12-05 13:49:42.162554] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-12-05 13:49:42.162560] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-12-05 13:49:42.162564] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:59.595 [2024-12-05 13:49:42.164149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
[2024-12-05 13:49:42.164257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
[2024-12-05 13:49:42.164371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-12-05 13:49:42.164381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:17:59.854 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:59.854 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0
00:17:59.854 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1
00:18:00.789 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER
00:18:01.046 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user
00:18:01.046 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2
00:18:01.046 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:18:01.046 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1
00:18:01.046 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:18:01.305 Malloc1
00:18:01.305 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
00:18:01.563 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
00:18:01.563 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
00:18:01.820 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:18:01.820 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2
00:18:01.820 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:18:02.078 Malloc2
00:18:02.078 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
00:18:02.336 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
00:18:02.336 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0
00:18:02.593 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user
00:18:02.593 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2
00:18:02.593 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES)
00:18:02.593 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1
00:18:02.593 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1
00:18:02.593 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci
[2024-12-05 13:49:45.161032] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization...
[2024-12-05 13:49:45.161071] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid630044 ]
[2024-12-05 13:49:45.199829] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1
[2024-12-05 13:49:45.205203] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32
[2024-12-05 13:49:45.205224] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f1ad9d66000
[2024-12-05 13:49:45.206207] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
[2024-12-05 13:49:45.207210] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
[2024-12-05 13:49:45.208217] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
[2024-12-05 13:49:45.209219] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0
[2024-12-05 13:49:45.210223] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
[2024-12-05 13:49:45.211237] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
[2024-12-05 13:49:45.212232] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
[2024-12-05 13:49:45.213244] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
[2024-12-05 13:49:45.214253] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32
[2024-12-05 13:49:45.214262] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f1ad9d5b000
[2024-12-05 13:49:45.215175] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
[2024-12-05 13:49:45.228624] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully
[2024-12-05 13:49:45.228647] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout)
[2024-12-05 13:49:45.231369] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff
[2024-12-05 13:49:45.231406] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192
[2024-12-05 13:49:45.231474] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout)
[2024-12-05 13:49:45.231488] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout)
[2024-12-05 13:49:45.231493] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout)
[2024-12-05 13:49:45.232363] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300
[2024-12-05 13:49:45.232377] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout)
[2024-12-05 13:49:45.232384] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout)
[2024-12-05 13:49:45.233372] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff
[2024-12-05 13:49:45.233384] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout)
[2024-12-05 13:49:45.233391] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms)
[2024-12-05 13:49:45.234377] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0
[2024-12-05 13:49:45.234385] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
[2024-12-05 13:49:45.235384] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0
[2024-12-05 13:49:45.235392] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0
[2024-12-05 13:49:45.235396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms)
[2024-12-05 13:49:45.235402] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
[2024-12-05 13:49:45.235509] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1
[2024-12-05 13:49:45.235513] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
[2024-12-05 13:49:45.235518] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000
[2024-12-05 13:49:45.236390] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000
[2024-12-05 13:49:45.237395] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff
[2024-12-05 13:49:45.238408] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001
[2024-12-05 13:49:45.239401] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
[2024-12-05 13:49:45.239460] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
[2024-12-05 13:49:45.240412] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1
[2024-12-05 13:49:45.240420] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
[2024-12-05 13:49:45.240424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms)
[2024-12-05 13:49:45.240440] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout)
[2024-12-05 13:49:45.240447] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms)
[2024-12-05 13:49:45.240465] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
[2024-12-05 13:49:45.240470] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
[2024-12-05 13:49:45.240474] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
[2024-12-05 13:49:45.240485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
[2024-12-05 13:49:45.240522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0
[2024-12-05 13:49:45.240531] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072
[2024-12-05 13:49:45.240536] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072
[2024-12-05 13:49:45.240540] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001
[2024-12-05 13:49:45.240545] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000
[2024-12-05 13:49:45.240550] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1
[2024-12-05 13:49:45.240554] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1
[2024-12-05 13:49:45.240558] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms)
[2024-12-05 13:49:45.240565] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms)
[2024-12-05 13:49:45.240573] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0
[2024-12-05 13:49:45.240587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0
[2024-12-05 13:49:45.240596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-05 13:49:45.240604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-05 13:49:45.240611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-05 13:49:45.240618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-05 13:49:45.240623] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms)
[2024-12-05 13:49:45.240630] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms)
[2024-12-05 13:49:45.240639] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0
[2024-12-05 13:49:45.240647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0
[2024-12-05 13:49:45.240652] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms
[2024-12-05 13:49:45.240656] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms)
[2024-12-05 13:49:45.240663] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms)
[2024-12-05 13:49:45.240668] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout
30000 ms) 00:18:02.853 [2024-12-05 13:49:45.240676] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:02.853 [2024-12-05 13:49:45.240685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:02.853 [2024-12-05 13:49:45.240735] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:18:02.853 [2024-12-05 13:49:45.240742] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:02.853 [2024-12-05 13:49:45.240749] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:02.853 [2024-12-05 13:49:45.240752] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:02.853 [2024-12-05 13:49:45.240755] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:02.853 [2024-12-05 13:49:45.240761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:02.853 [2024-12-05 13:49:45.240776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:02.853 [2024-12-05 13:49:45.240785] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:18:02.853 [2024-12-05 13:49:45.240794] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:18:02.853 [2024-12-05 13:49:45.240801] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:18:02.853 [2024-12-05 13:49:45.240807] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:02.853 [2024-12-05 13:49:45.240811] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:02.853 [2024-12-05 13:49:45.240814] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:02.853 [2024-12-05 13:49:45.240819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:02.853 [2024-12-05 13:49:45.240839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:02.853 [2024-12-05 13:49:45.240848] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:02.853 [2024-12-05 13:49:45.240855] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:02.853 [2024-12-05 13:49:45.240861] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:02.853 [2024-12-05 13:49:45.240865] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:02.853 [2024-12-05 13:49:45.240868] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:02.853 [2024-12-05 13:49:45.240873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:02.853 [2024-12-05 13:49:45.240883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:18:02.853 [2024-12-05 13:49:45.240891] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:02.853 [2024-12-05 13:49:45.240897] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:02.853 [2024-12-05 13:49:45.240903] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:18:02.853 [2024-12-05 13:49:45.240908] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:02.853 [2024-12-05 13:49:45.240914] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:02.853 [2024-12-05 13:49:45.240919] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:18:02.853 [2024-12-05 13:49:45.240923] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:02.853 [2024-12-05 13:49:45.240927] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:18:02.853 [2024-12-05 13:49:45.240932] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:18:02.853 [2024-12-05 13:49:45.240947] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:02.853 [2024-12-05 13:49:45.240959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:02.853 [2024-12-05 13:49:45.240970] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:02.853 [2024-12-05 13:49:45.240980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:02.853 [2024-12-05 13:49:45.240989] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:02.853 [2024-12-05 13:49:45.241001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:02.853 [2024-12-05 13:49:45.241011] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:02.853 [2024-12-05 13:49:45.241019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:02.853 [2024-12-05 13:49:45.241032] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:02.853 [2024-12-05 13:49:45.241036] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:02.853 [2024-12-05 13:49:45.241039] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:02.853 [2024-12-05 13:49:45.241043] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:02.853 [2024-12-05 13:49:45.241046] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:02.853 [2024-12-05 13:49:45.241051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:02.853 [2024-12-05 13:49:45.241057] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:02.853 [2024-12-05 13:49:45.241061] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:02.853 [2024-12-05 13:49:45.241064] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:02.853 [2024-12-05 13:49:45.241069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:02.853 [2024-12-05 13:49:45.241076] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:02.853 [2024-12-05 13:49:45.241080] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:02.853 [2024-12-05 13:49:45.241082] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:02.853 [2024-12-05 13:49:45.241088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:02.853 [2024-12-05 13:49:45.241095] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:02.853 [2024-12-05 13:49:45.241099] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:02.853 [2024-12-05 13:49:45.241103] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:02.853 [2024-12-05 13:49:45.241108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:02.853 [2024-12-05 13:49:45.241114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:02.853 [2024-12-05 
13:49:45.241123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:02.853 [2024-12-05 13:49:45.241132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:02.853 [2024-12-05 13:49:45.241139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:02.853 ===================================================== 00:18:02.853 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:02.853 ===================================================== 00:18:02.853 Controller Capabilities/Features 00:18:02.853 ================================ 00:18:02.853 Vendor ID: 4e58 00:18:02.853 Subsystem Vendor ID: 4e58 00:18:02.853 Serial Number: SPDK1 00:18:02.853 Model Number: SPDK bdev Controller 00:18:02.853 Firmware Version: 25.01 00:18:02.853 Recommended Arb Burst: 6 00:18:02.853 IEEE OUI Identifier: 8d 6b 50 00:18:02.853 Multi-path I/O 00:18:02.853 May have multiple subsystem ports: Yes 00:18:02.853 May have multiple controllers: Yes 00:18:02.853 Associated with SR-IOV VF: No 00:18:02.853 Max Data Transfer Size: 131072 00:18:02.853 Max Number of Namespaces: 32 00:18:02.853 Max Number of I/O Queues: 127 00:18:02.853 NVMe Specification Version (VS): 1.3 00:18:02.853 NVMe Specification Version (Identify): 1.3 00:18:02.853 Maximum Queue Entries: 256 00:18:02.854 Contiguous Queues Required: Yes 00:18:02.854 Arbitration Mechanisms Supported 00:18:02.854 Weighted Round Robin: Not Supported 00:18:02.854 Vendor Specific: Not Supported 00:18:02.854 Reset Timeout: 15000 ms 00:18:02.854 Doorbell Stride: 4 bytes 00:18:02.854 NVM Subsystem Reset: Not Supported 00:18:02.854 Command Sets Supported 00:18:02.854 NVM Command Set: Supported 00:18:02.854 Boot Partition: Not Supported 00:18:02.854 Memory Page Size Minimum: 4096 bytes 00:18:02.854 
Memory Page Size Maximum: 4096 bytes 00:18:02.854 Persistent Memory Region: Not Supported 00:18:02.854 Optional Asynchronous Events Supported 00:18:02.854 Namespace Attribute Notices: Supported 00:18:02.854 Firmware Activation Notices: Not Supported 00:18:02.854 ANA Change Notices: Not Supported 00:18:02.854 PLE Aggregate Log Change Notices: Not Supported 00:18:02.854 LBA Status Info Alert Notices: Not Supported 00:18:02.854 EGE Aggregate Log Change Notices: Not Supported 00:18:02.854 Normal NVM Subsystem Shutdown event: Not Supported 00:18:02.854 Zone Descriptor Change Notices: Not Supported 00:18:02.854 Discovery Log Change Notices: Not Supported 00:18:02.854 Controller Attributes 00:18:02.854 128-bit Host Identifier: Supported 00:18:02.854 Non-Operational Permissive Mode: Not Supported 00:18:02.854 NVM Sets: Not Supported 00:18:02.854 Read Recovery Levels: Not Supported 00:18:02.854 Endurance Groups: Not Supported 00:18:02.854 Predictable Latency Mode: Not Supported 00:18:02.854 Traffic Based Keep ALive: Not Supported 00:18:02.854 Namespace Granularity: Not Supported 00:18:02.854 SQ Associations: Not Supported 00:18:02.854 UUID List: Not Supported 00:18:02.854 Multi-Domain Subsystem: Not Supported 00:18:02.854 Fixed Capacity Management: Not Supported 00:18:02.854 Variable Capacity Management: Not Supported 00:18:02.854 Delete Endurance Group: Not Supported 00:18:02.854 Delete NVM Set: Not Supported 00:18:02.854 Extended LBA Formats Supported: Not Supported 00:18:02.854 Flexible Data Placement Supported: Not Supported 00:18:02.854 00:18:02.854 Controller Memory Buffer Support 00:18:02.854 ================================ 00:18:02.854 Supported: No 00:18:02.854 00:18:02.854 Persistent Memory Region Support 00:18:02.854 ================================ 00:18:02.854 Supported: No 00:18:02.854 00:18:02.854 Admin Command Set Attributes 00:18:02.854 ============================ 00:18:02.854 Security Send/Receive: Not Supported 00:18:02.854 Format NVM: Not Supported 
00:18:02.854 Firmware Activate/Download: Not Supported 00:18:02.854 Namespace Management: Not Supported 00:18:02.854 Device Self-Test: Not Supported 00:18:02.854 Directives: Not Supported 00:18:02.854 NVMe-MI: Not Supported 00:18:02.854 Virtualization Management: Not Supported 00:18:02.854 Doorbell Buffer Config: Not Supported 00:18:02.854 Get LBA Status Capability: Not Supported 00:18:02.854 Command & Feature Lockdown Capability: Not Supported 00:18:02.854 Abort Command Limit: 4 00:18:02.854 Async Event Request Limit: 4 00:18:02.854 Number of Firmware Slots: N/A 00:18:02.854 Firmware Slot 1 Read-Only: N/A 00:18:02.854 Firmware Activation Without Reset: N/A 00:18:02.854 Multiple Update Detection Support: N/A 00:18:02.854 Firmware Update Granularity: No Information Provided 00:18:02.854 Per-Namespace SMART Log: No 00:18:02.854 Asymmetric Namespace Access Log Page: Not Supported 00:18:02.854 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:02.854 Command Effects Log Page: Supported 00:18:02.854 Get Log Page Extended Data: Supported 00:18:02.854 Telemetry Log Pages: Not Supported 00:18:02.854 Persistent Event Log Pages: Not Supported 00:18:02.854 Supported Log Pages Log Page: May Support 00:18:02.854 Commands Supported & Effects Log Page: Not Supported 00:18:02.854 Feature Identifiers & Effects Log Page:May Support 00:18:02.854 NVMe-MI Commands & Effects Log Page: May Support 00:18:02.854 Data Area 4 for Telemetry Log: Not Supported 00:18:02.854 Error Log Page Entries Supported: 128 00:18:02.854 Keep Alive: Supported 00:18:02.854 Keep Alive Granularity: 10000 ms 00:18:02.854 00:18:02.854 NVM Command Set Attributes 00:18:02.854 ========================== 00:18:02.854 Submission Queue Entry Size 00:18:02.854 Max: 64 00:18:02.854 Min: 64 00:18:02.854 Completion Queue Entry Size 00:18:02.854 Max: 16 00:18:02.854 Min: 16 00:18:02.854 Number of Namespaces: 32 00:18:02.854 Compare Command: Supported 00:18:02.854 Write Uncorrectable Command: Not Supported 00:18:02.854 Dataset 
Management Command: Supported 00:18:02.854 Write Zeroes Command: Supported 00:18:02.854 Set Features Save Field: Not Supported 00:18:02.854 Reservations: Not Supported 00:18:02.854 Timestamp: Not Supported 00:18:02.854 Copy: Supported 00:18:02.854 Volatile Write Cache: Present 00:18:02.854 Atomic Write Unit (Normal): 1 00:18:02.854 Atomic Write Unit (PFail): 1 00:18:02.854 Atomic Compare & Write Unit: 1 00:18:02.854 Fused Compare & Write: Supported 00:18:02.854 Scatter-Gather List 00:18:02.854 SGL Command Set: Supported (Dword aligned) 00:18:02.854 SGL Keyed: Not Supported 00:18:02.854 SGL Bit Bucket Descriptor: Not Supported 00:18:02.854 SGL Metadata Pointer: Not Supported 00:18:02.854 Oversized SGL: Not Supported 00:18:02.854 SGL Metadata Address: Not Supported 00:18:02.854 SGL Offset: Not Supported 00:18:02.854 Transport SGL Data Block: Not Supported 00:18:02.854 Replay Protected Memory Block: Not Supported 00:18:02.854 00:18:02.854 Firmware Slot Information 00:18:02.854 ========================= 00:18:02.854 Active slot: 1 00:18:02.854 Slot 1 Firmware Revision: 25.01 00:18:02.854 00:18:02.854 00:18:02.854 Commands Supported and Effects 00:18:02.854 ============================== 00:18:02.854 Admin Commands 00:18:02.854 -------------- 00:18:02.854 Get Log Page (02h): Supported 00:18:02.854 Identify (06h): Supported 00:18:02.854 Abort (08h): Supported 00:18:02.854 Set Features (09h): Supported 00:18:02.854 Get Features (0Ah): Supported 00:18:02.854 Asynchronous Event Request (0Ch): Supported 00:18:02.854 Keep Alive (18h): Supported 00:18:02.854 I/O Commands 00:18:02.854 ------------ 00:18:02.854 Flush (00h): Supported LBA-Change 00:18:02.854 Write (01h): Supported LBA-Change 00:18:02.854 Read (02h): Supported 00:18:02.854 Compare (05h): Supported 00:18:02.854 Write Zeroes (08h): Supported LBA-Change 00:18:02.854 Dataset Management (09h): Supported LBA-Change 00:18:02.854 Copy (19h): Supported LBA-Change 00:18:02.854 00:18:02.854 Error Log 00:18:02.854 ========= 
00:18:02.854 00:18:02.854 Arbitration 00:18:02.854 =========== 00:18:02.854 Arbitration Burst: 1 00:18:02.854 00:18:02.854 Power Management 00:18:02.854 ================ 00:18:02.854 Number of Power States: 1 00:18:02.854 Current Power State: Power State #0 00:18:02.854 Power State #0: 00:18:02.854 Max Power: 0.00 W 00:18:02.854 Non-Operational State: Operational 00:18:02.854 Entry Latency: Not Reported 00:18:02.854 Exit Latency: Not Reported 00:18:02.854 Relative Read Throughput: 0 00:18:02.854 Relative Read Latency: 0 00:18:02.854 Relative Write Throughput: 0 00:18:02.854 Relative Write Latency: 0 00:18:02.854 Idle Power: Not Reported 00:18:02.854 Active Power: Not Reported 00:18:02.854 Non-Operational Permissive Mode: Not Supported 00:18:02.854 00:18:02.854 Health Information 00:18:02.854 ================== 00:18:02.854 Critical Warnings: 00:18:02.854 Available Spare Space: OK 00:18:02.854 Temperature: OK 00:18:02.854 Device Reliability: OK 00:18:02.854 Read Only: No 00:18:02.854 Volatile Memory Backup: OK 00:18:02.854 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:02.854 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:02.854 Available Spare: 0% 00:18:02.854 Available Sp[2024-12-05 13:49:45.241218] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:02.854 [2024-12-05 13:49:45.241227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:02.854 [2024-12-05 13:49:45.241252] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:18:02.854 [2024-12-05 13:49:45.241260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.854 [2024-12-05 13:49:45.241265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.854 [2024-12-05 13:49:45.241271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.854 [2024-12-05 13:49:45.241276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.854 [2024-12-05 13:49:45.243374] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:02.854 [2024-12-05 13:49:45.243385] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:02.854 [2024-12-05 13:49:45.243422] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:02.854 [2024-12-05 13:49:45.243470] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:18:02.854 [2024-12-05 13:49:45.243476] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:18:02.854 [2024-12-05 13:49:45.244430] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:02.854 [2024-12-05 13:49:45.244440] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:18:02.854 [2024-12-05 13:49:45.244487] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:02.854 [2024-12-05 13:49:45.245452] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:02.854 are Threshold: 0% 00:18:02.854 Life Percentage Used: 0% 00:18:02.854 Data Units Read: 0 00:18:02.854 Data 
Units Written: 0 00:18:02.854 Host Read Commands: 0 00:18:02.854 Host Write Commands: 0 00:18:02.854 Controller Busy Time: 0 minutes 00:18:02.854 Power Cycles: 0 00:18:02.854 Power On Hours: 0 hours 00:18:02.854 Unsafe Shutdowns: 0 00:18:02.854 Unrecoverable Media Errors: 0 00:18:02.854 Lifetime Error Log Entries: 0 00:18:02.854 Warning Temperature Time: 0 minutes 00:18:02.854 Critical Temperature Time: 0 minutes 00:18:02.854 00:18:02.854 Number of Queues 00:18:02.854 ================ 00:18:02.854 Number of I/O Submission Queues: 127 00:18:02.854 Number of I/O Completion Queues: 127 00:18:02.854 00:18:02.854 Active Namespaces 00:18:02.854 ================= 00:18:02.854 Namespace ID:1 00:18:02.854 Error Recovery Timeout: Unlimited 00:18:02.854 Command Set Identifier: NVM (00h) 00:18:02.854 Deallocate: Supported 00:18:02.854 Deallocated/Unwritten Error: Not Supported 00:18:02.854 Deallocated Read Value: Unknown 00:18:02.854 Deallocate in Write Zeroes: Not Supported 00:18:02.854 Deallocated Guard Field: 0xFFFF 00:18:02.854 Flush: Supported 00:18:02.854 Reservation: Supported 00:18:02.854 Namespace Sharing Capabilities: Multiple Controllers 00:18:02.854 Size (in LBAs): 131072 (0GiB) 00:18:02.854 Capacity (in LBAs): 131072 (0GiB) 00:18:02.854 Utilization (in LBAs): 131072 (0GiB) 00:18:02.854 NGUID: 06DEDF8B237F47CEA4C34B8CA6683058 00:18:02.854 UUID: 06dedf8b-237f-47ce-a4c3-4b8ca6683058 00:18:02.854 Thin Provisioning: Not Supported 00:18:02.854 Per-NS Atomic Units: Yes 00:18:02.854 Atomic Boundary Size (Normal): 0 00:18:02.854 Atomic Boundary Size (PFail): 0 00:18:02.854 Atomic Boundary Offset: 0 00:18:02.854 Maximum Single Source Range Length: 65535 00:18:02.854 Maximum Copy Length: 65535 00:18:02.854 Maximum Source Range Count: 1 00:18:02.855 NGUID/EUI64 Never Reused: No 00:18:02.855 Namespace Write Protected: No 00:18:02.855 Number of LBA Formats: 1 00:18:02.855 Current LBA Format: LBA Format #00 00:18:02.855 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:18:02.855 00:18:02.855 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:03.112 [2024-12-05 13:49:45.476398] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:08.375 Initializing NVMe Controllers 00:18:08.375 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:08.375 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:08.375 Initialization complete. Launching workers. 00:18:08.375 ======================================================== 00:18:08.375 Latency(us) 00:18:08.375 Device Information : IOPS MiB/s Average min max 00:18:08.375 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39877.55 155.77 3209.66 960.33 7217.60 00:18:08.375 ======================================================== 00:18:08.375 Total : 39877.55 155.77 3209.66 960.33 7217.60 00:18:08.375 00:18:08.375 [2024-12-05 13:49:50.495653] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:08.375 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:08.375 [2024-12-05 13:49:50.730746] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:13.636 Initializing NVMe Controllers 00:18:13.636 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:18:13.636 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:13.636 Initialization complete. Launching workers. 00:18:13.636 ======================================================== 00:18:13.636 Latency(us) 00:18:13.636 Device Information : IOPS MiB/s Average min max 00:18:13.636 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16039.11 62.65 7979.83 7589.58 7999.62 00:18:13.636 ======================================================== 00:18:13.636 Total : 16039.11 62.65 7979.83 7589.58 7999.62 00:18:13.636 00:18:13.636 [2024-12-05 13:49:55.768765] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:13.636 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:13.636 [2024-12-05 13:49:55.968700] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:18.920 [2024-12-05 13:50:01.040666] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:18.920 Initializing NVMe Controllers 00:18:18.920 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:18.920 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:18.920 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:18.920 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:18.920 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:18.920 Initialization complete. Launching workers. 
00:18:18.920 Starting thread on core 2 00:18:18.920 Starting thread on core 3 00:18:18.920 Starting thread on core 1 00:18:18.920 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:18.920 [2024-12-05 13:50:01.339803] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:22.203 [2024-12-05 13:50:04.401325] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:22.203 Initializing NVMe Controllers 00:18:22.203 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:22.203 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:22.203 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:22.203 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:22.203 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:22.203 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:22.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:22.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:22.203 Initialization complete. Launching workers. 
00:18:22.203 Starting thread on core 1 with urgent priority queue 00:18:22.203 Starting thread on core 2 with urgent priority queue 00:18:22.203 Starting thread on core 3 with urgent priority queue 00:18:22.203 Starting thread on core 0 with urgent priority queue 00:18:22.203 SPDK bdev Controller (SPDK1 ) core 0: 7144.67 IO/s 14.00 secs/100000 ios 00:18:22.203 SPDK bdev Controller (SPDK1 ) core 1: 7785.00 IO/s 12.85 secs/100000 ios 00:18:22.203 SPDK bdev Controller (SPDK1 ) core 2: 7083.67 IO/s 14.12 secs/100000 ios 00:18:22.203 SPDK bdev Controller (SPDK1 ) core 3: 7893.33 IO/s 12.67 secs/100000 ios 00:18:22.203 ======================================================== 00:18:22.203 00:18:22.203 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:22.203 [2024-12-05 13:50:04.686818] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:22.203 Initializing NVMe Controllers 00:18:22.203 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:22.203 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:22.203 Namespace ID: 1 size: 0GB 00:18:22.203 Initialization complete. 00:18:22.203 INFO: using host memory buffer for IO 00:18:22.203 Hello world! 
00:18:22.203 [2024-12-05 13:50:04.720014] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:22.203 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:22.461 [2024-12-05 13:50:05.005782] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:23.856 Initializing NVMe Controllers 00:18:23.856 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:23.856 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:23.856 Initialization complete. Launching workers. 00:18:23.856 submit (in ns) avg, min, max = 7002.6, 3139.0, 3997969.5 00:18:23.856 complete (in ns) avg, min, max = 20522.7, 1724.8, 4993498.1 00:18:23.856 00:18:23.856 Submit histogram 00:18:23.856 ================ 00:18:23.856 Range in us Cumulative Count 00:18:23.857 3.139 - 3.154: 0.0182% ( 3) 00:18:23.857 3.154 - 3.170: 0.0303% ( 2) 00:18:23.857 3.170 - 3.185: 0.0485% ( 3) 00:18:23.857 3.185 - 3.200: 0.2730% ( 37) 00:18:23.857 3.200 - 3.215: 1.9230% ( 272) 00:18:23.857 3.215 - 3.230: 6.8790% ( 817) 00:18:23.857 3.230 - 3.246: 12.1322% ( 866) 00:18:23.857 3.246 - 3.261: 18.4107% ( 1035) 00:18:23.857 3.261 - 3.276: 24.8650% ( 1064) 00:18:23.857 3.276 - 3.291: 30.8766% ( 991) 00:18:23.857 3.291 - 3.307: 36.4453% ( 918) 00:18:23.857 3.307 - 3.322: 42.1535% ( 941) 00:18:23.857 3.322 - 3.337: 48.0073% ( 965) 00:18:23.857 3.337 - 3.352: 53.4607% ( 899) 00:18:23.857 3.352 - 3.368: 60.4550% ( 1153) 00:18:23.857 3.368 - 3.383: 68.0801% ( 1257) 00:18:23.857 3.383 - 3.398: 72.6418% ( 752) 00:18:23.857 3.398 - 3.413: 78.1438% ( 907) 00:18:23.857 3.413 - 3.429: 81.9048% ( 620) 00:18:23.857 3.429 - 3.444: 84.5253% ( 432) 00:18:23.857 3.444 - 3.459: 86.0965% ( 259) 
00:18:23.857 3.459 - 3.474: 86.9942% ( 148) 00:18:23.857 3.474 - 3.490: 87.3885% ( 65) 00:18:23.857 3.490 - 3.505: 87.8738% ( 80) 00:18:23.857 3.505 - 3.520: 88.4744% ( 99) 00:18:23.857 3.520 - 3.535: 89.3964% ( 152) 00:18:23.857 3.535 - 3.550: 90.2881% ( 147) 00:18:23.857 3.550 - 3.566: 91.1677% ( 145) 00:18:23.857 3.566 - 3.581: 92.2050% ( 171) 00:18:23.857 3.581 - 3.596: 93.1150% ( 150) 00:18:23.857 3.596 - 3.611: 94.0916% ( 161) 00:18:23.857 3.611 - 3.627: 95.0864% ( 164) 00:18:23.857 3.627 - 3.642: 95.9660% ( 145) 00:18:23.857 3.642 - 3.657: 96.8577% ( 147) 00:18:23.857 3.657 - 3.672: 97.4340% ( 95) 00:18:23.857 3.672 - 3.688: 97.9497% ( 85) 00:18:23.857 3.688 - 3.703: 98.3925% ( 73) 00:18:23.857 3.703 - 3.718: 98.7807% ( 64) 00:18:23.857 3.718 - 3.733: 99.0658% ( 47) 00:18:23.857 3.733 - 3.749: 99.2478% ( 30) 00:18:23.857 3.749 - 3.764: 99.3813% ( 22) 00:18:23.857 3.764 - 3.779: 99.4540% ( 12) 00:18:23.857 3.779 - 3.794: 99.5147% ( 10) 00:18:23.857 3.794 - 3.810: 99.5450% ( 5) 00:18:23.857 3.810 - 3.825: 99.5814% ( 6) 00:18:23.857 3.825 - 3.840: 99.5996% ( 3) 00:18:23.857 3.840 - 3.855: 99.6057% ( 1) 00:18:23.857 3.855 - 3.870: 99.6118% ( 1) 00:18:23.858 3.931 - 3.962: 99.6178% ( 1) 00:18:23.858 3.962 - 3.992: 99.6300% ( 2) 00:18:23.858 4.053 - 4.084: 99.6360% ( 1) 00:18:23.858 4.815 - 4.846: 99.6421% ( 1) 00:18:23.858 5.120 - 5.150: 99.6482% ( 1) 00:18:23.858 5.181 - 5.211: 99.6542% ( 1) 00:18:23.858 5.242 - 5.272: 99.6603% ( 1) 00:18:23.858 5.272 - 5.303: 99.6664% ( 1) 00:18:23.858 5.364 - 5.394: 99.6724% ( 1) 00:18:23.858 5.394 - 5.425: 99.6785% ( 1) 00:18:23.858 5.455 - 5.486: 99.6846% ( 1) 00:18:23.858 5.516 - 5.547: 99.6906% ( 1) 00:18:23.858 5.608 - 5.638: 99.6967% ( 1) 00:18:23.858 5.638 - 5.669: 99.7028% ( 1) 00:18:23.858 5.669 - 5.699: 99.7088% ( 1) 00:18:23.858 5.699 - 5.730: 99.7149% ( 1) 00:18:23.858 5.730 - 5.760: 99.7270% ( 2) 00:18:23.858 5.760 - 5.790: 99.7392% ( 2) 00:18:23.858 5.790 - 5.821: 99.7452% ( 1) 00:18:23.858 5.821 - 5.851: 
99.7513% ( 1) 00:18:23.858 5.851 - 5.882: 99.7634% ( 2) 00:18:23.858 5.882 - 5.912: 99.7816% ( 3) 00:18:23.858 5.943 - 5.973: 99.7877% ( 1) 00:18:23.858 6.004 - 6.034: 99.7938% ( 1) 00:18:23.858 6.065 - 6.095: 99.7998% ( 1) 00:18:23.858 6.126 - 6.156: 99.8059% ( 1) 00:18:23.858 6.217 - 6.248: 99.8120% ( 1) 00:18:23.858 6.248 - 6.278: 99.8180% ( 1) 00:18:23.858 6.278 - 6.309: 99.8241% ( 1) 00:18:23.858 6.309 - 6.339: 99.8301% ( 1) 00:18:23.858 6.339 - 6.370: 99.8362% ( 1) 00:18:23.858 [2024-12-05 13:50:06.028719] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:23.858 6.430 - 6.461: 99.8423% ( 1) 00:18:23.859 6.644 - 6.674: 99.8483% ( 1) 00:18:23.859 6.735 - 6.766: 99.8544% ( 1) 00:18:23.859 7.314 - 7.345: 99.8605% ( 1) 00:18:23.859 7.467 - 7.497: 99.8665% ( 1) 00:18:23.859 7.558 - 7.589: 99.8726% ( 1) 00:18:23.859 7.650 - 7.680: 99.8787% ( 1) 00:18:23.859 7.680 - 7.710: 99.8847% ( 1) 00:18:23.859 8.533 - 8.594: 99.8908% ( 1) 00:18:23.859 9.935 - 9.996: 99.8969% ( 1) 00:18:23.859 10.606 - 10.667: 99.9029% ( 1) 00:18:23.859 13.166 - 13.227: 99.9090% ( 1) 00:18:23.859 3994.575 - 4025.783: 100.0000% ( 15) 00:18:23.859 00:18:23.859 Complete histogram 00:18:23.859 ================== 00:18:23.859 Range in us Cumulative Count 00:18:23.859 1.722 - 1.730: 0.0061% ( 1) 00:18:23.859 1.730 - 1.737: 0.0789% ( 12) 00:18:23.859 1.737 - 1.745: 0.1335% ( 9) 00:18:23.859 1.745 - 1.752: 0.1517% ( 3) 00:18:23.859 1.760 - 1.768: 0.1699% ( 3) 00:18:23.859 1.768 - 1.775: 1.0980% ( 153) 00:18:23.859 1.775 - 1.783: 10.9857% ( 1630) 00:18:23.871 1.783 - 1.790: 39.0112% ( 4620) 00:18:23.871 1.790 - 1.798: 67.3946% ( 4679) 00:18:23.871 1.798 - 1.806: 78.5017% ( 1831) 00:18:23.871 1.806 - 1.813: 82.3476% ( 634) 00:18:23.871 1.813 - 1.821: 85.1259% ( 458) 00:18:23.871 1.821 - 1.829: 86.9093% ( 294) 00:18:23.871 1.829 - 1.836: 88.6260% ( 283) 00:18:23.871 1.836 - 1.844: 91.3497% ( 449) 00:18:23.871 1.844 - 1.851: 94.0916% ( 452) 
00:18:23.871 1.851 - 1.859: 95.8265% ( 286) 00:18:23.871 1.859 - 1.867: 97.1489% ( 218) 00:18:23.871 1.867 - 1.874: 98.2166% ( 176) 00:18:23.871 1.874 - 1.882: 98.7686% ( 91) 00:18:23.871 1.882 - 1.890: 98.9263% ( 26) 00:18:23.871 1.890 - 1.897: 99.0294% ( 17) 00:18:23.871 1.897 - 1.905: 99.0719% ( 7) 00:18:23.871 1.905 - 1.912: 99.1204% ( 8) 00:18:23.871 1.912 - 1.920: 99.1871% ( 11) 00:18:23.871 1.920 - 1.928: 99.2660% ( 13) 00:18:23.871 1.928 - 1.935: 99.2903% ( 4) 00:18:23.871 1.950 - 1.966: 99.3024% ( 2) 00:18:23.871 1.966 - 1.981: 99.3085% ( 1) 00:18:23.871 2.011 - 2.027: 99.3145% ( 1) 00:18:23.871 2.042 - 2.057: 99.3267% ( 2) 00:18:23.871 2.072 - 2.088: 99.3327% ( 1) 00:18:23.871 2.133 - 2.149: 99.3388% ( 1) 00:18:23.871 2.149 - 2.164: 99.3509% ( 2) 00:18:23.871 2.164 - 2.179: 99.3813% ( 5) 00:18:23.871 2.210 - 2.225: 99.3873% ( 1) 00:18:23.871 2.270 - 2.286: 99.3934% ( 1) 00:18:23.871 3.185 - 3.200: 99.3995% ( 1) 00:18:23.871 3.444 - 3.459: 99.4055% ( 1) 00:18:23.871 3.490 - 3.505: 99.4116% ( 1) 00:18:23.871 3.611 - 3.627: 99.4177% ( 1) 00:18:23.871 3.627 - 3.642: 99.4237% ( 1) 00:18:23.871 3.672 - 3.688: 99.4298% ( 1) 00:18:23.871 3.992 - 4.023: 99.4419% ( 2) 00:18:23.871 4.114 - 4.145: 99.4480% ( 1) 00:18:23.871 4.206 - 4.236: 99.4601% ( 2) 00:18:23.871 4.236 - 4.267: 99.4662% ( 1) 00:18:23.871 4.419 - 4.450: 99.4722% ( 1) 00:18:23.871 4.571 - 4.602: 99.4783% ( 1) 00:18:23.871 4.663 - 4.693: 99.4844% ( 1) 00:18:23.871 4.846 - 4.876: 99.4904% ( 1) 00:18:23.872 4.876 - 4.907: 99.4965% ( 1) 00:18:23.872 5.059 - 5.090: 99.5026% ( 1) 00:18:23.872 5.090 - 5.120: 99.5086% ( 1) 00:18:23.872 5.181 - 5.211: 99.5147% ( 1) 00:18:23.872 5.242 - 5.272: 99.5208% ( 1) 00:18:23.872 13.653 - 13.714: 99.5268% ( 1) 00:18:23.872 38.522 - 38.766: 99.5329% ( 1) 00:18:23.872 3994.575 - 4025.783: 99.9939% ( 76) 00:18:23.872 4993.219 - 5024.427: 100.0000% ( 1) 00:18:23.872 00:18:23.872 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # 
aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:23.872 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:23.872 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:23.872 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:23.872 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:23.872 [ 00:18:23.872 { 00:18:23.872 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:23.872 "subtype": "Discovery", 00:18:23.872 "listen_addresses": [], 00:18:23.872 "allow_any_host": true, 00:18:23.872 "hosts": [] 00:18:23.872 }, 00:18:23.872 { 00:18:23.872 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:23.872 "subtype": "NVMe", 00:18:23.872 "listen_addresses": [ 00:18:23.872 { 00:18:23.872 "trtype": "VFIOUSER", 00:18:23.872 "adrfam": "IPv4", 00:18:23.872 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:23.872 "trsvcid": "0" 00:18:23.872 } 00:18:23.872 ], 00:18:23.872 "allow_any_host": true, 00:18:23.872 "hosts": [], 00:18:23.872 "serial_number": "SPDK1", 00:18:23.872 "model_number": "SPDK bdev Controller", 00:18:23.872 "max_namespaces": 32, 00:18:23.872 "min_cntlid": 1, 00:18:23.872 "max_cntlid": 65519, 00:18:23.872 "namespaces": [ 00:18:23.872 { 00:18:23.872 "nsid": 1, 00:18:23.872 "bdev_name": "Malloc1", 00:18:23.872 "name": "Malloc1", 00:18:23.872 "nguid": "06DEDF8B237F47CEA4C34B8CA6683058", 00:18:23.872 "uuid": "06dedf8b-237f-47ce-a4c3-4b8ca6683058" 00:18:23.872 } 00:18:23.872 ] 00:18:23.872 }, 00:18:23.872 { 00:18:23.872 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:23.872 "subtype": "NVMe", 00:18:23.872 "listen_addresses": [ 00:18:23.872 { 00:18:23.872 "trtype": "VFIOUSER", 
00:18:23.872 "adrfam": "IPv4", 00:18:23.872 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:23.872 "trsvcid": "0" 00:18:23.872 } 00:18:23.872 ], 00:18:23.872 "allow_any_host": true, 00:18:23.872 "hosts": [], 00:18:23.872 "serial_number": "SPDK2", 00:18:23.872 "model_number": "SPDK bdev Controller", 00:18:23.872 "max_namespaces": 32, 00:18:23.872 "min_cntlid": 1, 00:18:23.872 "max_cntlid": 65519, 00:18:23.872 "namespaces": [ 00:18:23.872 { 00:18:23.872 "nsid": 1, 00:18:23.872 "bdev_name": "Malloc2", 00:18:23.872 "name": "Malloc2", 00:18:23.872 "nguid": "13215D5101FE4B55A9EA4771104AB52D", 00:18:23.872 "uuid": "13215d51-01fe-4b55-a9ea-4771104ab52d" 00:18:23.872 } 00:18:23.872 ] 00:18:23.872 } 00:18:23.872 ] 00:18:23.872 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:23.872 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=633507 00:18:23.872 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:23.872 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:23.872 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:23.872 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:23.872 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:23.872 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:23.872 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:23.872 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:24.136 [2024-12-05 13:50:06.441791] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:24.136 Malloc3 00:18:24.136 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:24.136 [2024-12-05 13:50:06.668481] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:24.136 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:24.136 Asynchronous Event Request test 00:18:24.136 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:24.136 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:24.136 Registering asynchronous event callbacks... 00:18:24.136 Starting namespace attribute notice tests for all controllers... 00:18:24.136 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:24.136 aer_cb - Changed Namespace 00:18:24.136 Cleaning up... 
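The trace above shows the AER handshake: the aer tool is started with `-t /tmp/aer_touch_file`, and the script's `waitforfile` helper (the `local i=0` / `[ ! -e /tmp/aer_touch_file ]` checks from autotest_common.sh) polls until that file appears before proceeding. A minimal Python sketch of that polling pattern, assuming only the touch-file contract visible in the log (timeout and poll interval are illustrative, not the script's actual values):

```python
import os
import tempfile
import time

def wait_for_file(path, timeout_s=5.0, poll_s=0.05):
    """Poll until `path` exists, in the spirit of autotest_common.sh's waitforfile."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(poll_s)
    return False

# Demo: the file already exists, so the poll returns on the first check.
with tempfile.NamedTemporaryFile() as f:
    print(wait_for_file(f.name, timeout_s=1.0))  # -> True
```

In the real flow the waiting side then removes the touch file (`rm -f /tmp/aer_touch_file`, as at @38 above) so the next iteration starts clean.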
00:18:24.393 [ 00:18:24.393 { 00:18:24.393 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:24.393 "subtype": "Discovery", 00:18:24.393 "listen_addresses": [], 00:18:24.393 "allow_any_host": true, 00:18:24.393 "hosts": [] 00:18:24.393 }, 00:18:24.393 { 00:18:24.393 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:24.393 "subtype": "NVMe", 00:18:24.393 "listen_addresses": [ 00:18:24.393 { 00:18:24.393 "trtype": "VFIOUSER", 00:18:24.393 "adrfam": "IPv4", 00:18:24.393 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:24.393 "trsvcid": "0" 00:18:24.393 } 00:18:24.393 ], 00:18:24.393 "allow_any_host": true, 00:18:24.393 "hosts": [], 00:18:24.393 "serial_number": "SPDK1", 00:18:24.393 "model_number": "SPDK bdev Controller", 00:18:24.393 "max_namespaces": 32, 00:18:24.393 "min_cntlid": 1, 00:18:24.393 "max_cntlid": 65519, 00:18:24.393 "namespaces": [ 00:18:24.393 { 00:18:24.393 "nsid": 1, 00:18:24.393 "bdev_name": "Malloc1", 00:18:24.393 "name": "Malloc1", 00:18:24.393 "nguid": "06DEDF8B237F47CEA4C34B8CA6683058", 00:18:24.393 "uuid": "06dedf8b-237f-47ce-a4c3-4b8ca6683058" 00:18:24.393 }, 00:18:24.393 { 00:18:24.393 "nsid": 2, 00:18:24.393 "bdev_name": "Malloc3", 00:18:24.393 "name": "Malloc3", 00:18:24.393 "nguid": "ADD92C430B7B4134A3C4EB6A2622F330", 00:18:24.393 "uuid": "add92c43-0b7b-4134-a3c4-eb6a2622f330" 00:18:24.393 } 00:18:24.393 ] 00:18:24.393 }, 00:18:24.393 { 00:18:24.393 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:24.393 "subtype": "NVMe", 00:18:24.393 "listen_addresses": [ 00:18:24.393 { 00:18:24.393 "trtype": "VFIOUSER", 00:18:24.393 "adrfam": "IPv4", 00:18:24.393 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:24.393 "trsvcid": "0" 00:18:24.393 } 00:18:24.393 ], 00:18:24.393 "allow_any_host": true, 00:18:24.393 "hosts": [], 00:18:24.393 "serial_number": "SPDK2", 00:18:24.393 "model_number": "SPDK bdev Controller", 00:18:24.393 "max_namespaces": 32, 00:18:24.393 "min_cntlid": 1, 00:18:24.393 "max_cntlid": 65519, 00:18:24.393 "namespaces": [ 
00:18:24.393 { 00:18:24.393 "nsid": 1, 00:18:24.393 "bdev_name": "Malloc2", 00:18:24.393 "name": "Malloc2", 00:18:24.393 "nguid": "13215D5101FE4B55A9EA4771104AB52D", 00:18:24.393 "uuid": "13215d51-01fe-4b55-a9ea-4771104ab52d" 00:18:24.393 } 00:18:24.393 ] 00:18:24.393 } 00:18:24.393 ] 00:18:24.393 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 633507 00:18:24.393 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:24.393 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:24.393 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:24.393 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:24.393 [2024-12-05 13:50:06.904616] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
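The second `nvmf_get_subsystems` dump above confirms that `nvmf_subsystem_add_ns ... Malloc3 -n 2` attached Malloc3 as nsid 2 on cnode1. A sketch of how a checker could verify that from the RPC output; the JSON here is a trimmed-down stand-in for the dump above (only `nqn` and the namespace fields kept), and `namespace_bdevs` is a hypothetical helper, not part of rpc.py:

```python
import json

# Trimmed stand-in for the `rpc.py nvmf_get_subsystems` output logged above.
rpc_output = json.loads("""
[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "namespaces": []},
  {"nqn": "nqn.2019-07.io.spdk:cnode1",
   "namespaces": [{"nsid": 1, "bdev_name": "Malloc1"},
                  {"nsid": 2, "bdev_name": "Malloc3"}]}
]
""")

def namespace_bdevs(subsystems, nqn):
    """Map nsid -> bdev_name for one subsystem in the RPC dump."""
    for subsys in subsystems:
        if subsys["nqn"] == nqn:
            return {ns["nsid"]: ns["bdev_name"] for ns in subsys.get("namespaces", [])}
    return {}

print(namespace_bdevs(rpc_output, "nqn.2019-07.io.spdk:cnode1"))
# -> {1: 'Malloc1', 2: 'Malloc3'}
```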
00:18:24.393 [2024-12-05 13:50:06.904650] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid633523 ] 00:18:24.393 [2024-12-05 13:50:06.943695] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:24.393 [2024-12-05 13:50:06.948928] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:24.393 [2024-12-05 13:50:06.948950] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa3fa700000 00:18:24.393 [2024-12-05 13:50:06.949934] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:24.393 [2024-12-05 13:50:06.950939] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:24.393 [2024-12-05 13:50:06.951941] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:24.393 [2024-12-05 13:50:06.952946] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:24.393 [2024-12-05 13:50:06.953948] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:24.393 [2024-12-05 13:50:06.954955] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:24.393 [2024-12-05 13:50:06.955963] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:24.393 
[2024-12-05 13:50:06.956964] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:24.393 [2024-12-05 13:50:06.957976] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:24.393 [2024-12-05 13:50:06.957986] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa3fa6f5000 00:18:24.393 [2024-12-05 13:50:06.958897] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:24.393 [2024-12-05 13:50:06.968253] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:24.393 [2024-12-05 13:50:06.968275] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:18:24.393 [2024-12-05 13:50:06.973361] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:24.393 [2024-12-05 13:50:06.973399] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:24.393 [2024-12-05 13:50:06.973467] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:18:24.393 [2024-12-05 13:50:06.973480] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:18:24.393 [2024-12-05 13:50:06.973485] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:18:24.393 [2024-12-05 13:50:06.974365] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:24.393 [2024-12-05 13:50:06.974383] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:18:24.393 [2024-12-05 13:50:06.974390] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:18:24.393 [2024-12-05 13:50:06.975370] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:24.393 [2024-12-05 13:50:06.975378] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:18:24.393 [2024-12-05 13:50:06.975385] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:24.393 [2024-12-05 13:50:06.976381] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:24.394 [2024-12-05 13:50:06.976390] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:24.394 [2024-12-05 13:50:06.977382] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:24.394 [2024-12-05 13:50:06.977391] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:24.394 [2024-12-05 13:50:06.977395] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:24.394 [2024-12-05 13:50:06.977402] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:24.394 [2024-12-05 13:50:06.977509] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:18:24.394 [2024-12-05 13:50:06.977513] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:24.394 [2024-12-05 13:50:06.977518] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:24.652 [2024-12-05 13:50:06.978388] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:24.653 [2024-12-05 13:50:06.979392] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:24.653 [2024-12-05 13:50:06.980397] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:24.653 [2024-12-05 13:50:06.981400] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:24.653 [2024-12-05 13:50:06.981438] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:24.653 [2024-12-05 13:50:06.982408] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:24.653 [2024-12-05 13:50:06.982416] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:24.653 [2024-12-05 13:50:06.982421] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:24.653 [2024-12-05 13:50:06.982437] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:18:24.653 [2024-12-05 13:50:06.982444] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:24.653 [2024-12-05 13:50:06.982458] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:24.653 [2024-12-05 13:50:06.982462] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:24.653 [2024-12-05 13:50:06.982466] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:24.653 [2024-12-05 13:50:06.982477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:24.653 [2024-12-05 13:50:06.991376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:24.653 [2024-12-05 13:50:06.991387] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:18:24.653 [2024-12-05 13:50:06.991392] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:18:24.653 [2024-12-05 13:50:06.991396] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:18:24.653 [2024-12-05 13:50:06.991403] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:24.653 [2024-12-05 13:50:06.991407] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:18:24.653 [2024-12-05 13:50:06.991411] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:18:24.653 [2024-12-05 13:50:06.991416] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:18:24.653 [2024-12-05 13:50:06.991423] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:24.653 [2024-12-05 13:50:06.991432] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:24.653 [2024-12-05 13:50:06.999372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:24.653 [2024-12-05 13:50:06.999383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:24.653 [2024-12-05 13:50:06.999390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:24.653 [2024-12-05 13:50:06.999398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:24.653 [2024-12-05 13:50:06.999405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:24.653 [2024-12-05 13:50:06.999409] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:24.653 [2024-12-05 13:50:06.999418] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:24.653 [2024-12-05 13:50:06.999426] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:24.653 [2024-12-05 13:50:07.007372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:24.653 [2024-12-05 13:50:07.007379] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:18:24.653 [2024-12-05 13:50:07.007384] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:24.653 [2024-12-05 13:50:07.007394] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:18:24.653 [2024-12-05 13:50:07.007399] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:24.653 [2024-12-05 13:50:07.007408] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:24.653 [2024-12-05 13:50:07.015371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:24.653 [2024-12-05 13:50:07.015424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:18:24.653 [2024-12-05 13:50:07.015432] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:24.653 
[2024-12-05 13:50:07.015439] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:24.653 [2024-12-05 13:50:07.015445] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:24.653 [2024-12-05 13:50:07.015448] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:24.653 [2024-12-05 13:50:07.015454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:24.653 [2024-12-05 13:50:07.023372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:24.653 [2024-12-05 13:50:07.023384] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:18:24.653 [2024-12-05 13:50:07.023392] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:18:24.653 [2024-12-05 13:50:07.023398] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:24.653 [2024-12-05 13:50:07.023405] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:24.653 [2024-12-05 13:50:07.023409] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:24.653 [2024-12-05 13:50:07.023412] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:24.653 [2024-12-05 13:50:07.023418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:24.653 [2024-12-05 13:50:07.031373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:24.653 [2024-12-05 13:50:07.031384] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:24.653 [2024-12-05 13:50:07.031391] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:24.653 [2024-12-05 13:50:07.031398] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:24.653 [2024-12-05 13:50:07.031402] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:24.653 [2024-12-05 13:50:07.031405] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:24.653 [2024-12-05 13:50:07.031411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:24.653 [2024-12-05 13:50:07.039373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:24.653 [2024-12-05 13:50:07.039384] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:24.653 [2024-12-05 13:50:07.039390] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:24.653 [2024-12-05 13:50:07.039396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:18:24.653 [2024-12-05 13:50:07.039402] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:18:24.653 [2024-12-05 13:50:07.039406] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:24.653 [2024-12-05 13:50:07.039411] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:18:24.653 [2024-12-05 13:50:07.039415] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:24.653 [2024-12-05 13:50:07.039421] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:18:24.653 [2024-12-05 13:50:07.039426] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:18:24.653 [2024-12-05 13:50:07.039441] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:24.653 [2024-12-05 13:50:07.047371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:24.653 [2024-12-05 13:50:07.047383] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:24.653 [2024-12-05 13:50:07.055371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:24.653 [2024-12-05 13:50:07.055383] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:24.653 [2024-12-05 13:50:07.063371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:24.653 [2024-12-05 
13:50:07.063383] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:24.653 [2024-12-05 13:50:07.071373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:24.653 [2024-12-05 13:50:07.071387] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:24.653 [2024-12-05 13:50:07.071391] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:24.653 [2024-12-05 13:50:07.071395] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:24.653 [2024-12-05 13:50:07.071398] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:24.653 [2024-12-05 13:50:07.071400] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:24.653 [2024-12-05 13:50:07.071406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:24.653 [2024-12-05 13:50:07.071413] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:24.653 [2024-12-05 13:50:07.071416] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:24.653 [2024-12-05 13:50:07.071420] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:24.653 [2024-12-05 13:50:07.071425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:24.653 [2024-12-05 13:50:07.071431] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:24.653 [2024-12-05 13:50:07.071435] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:24.653 [2024-12-05 13:50:07.071438] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:24.653 [2024-12-05 13:50:07.071443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:24.653 [2024-12-05 13:50:07.071449] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:24.653 [2024-12-05 13:50:07.071453] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:24.653 [2024-12-05 13:50:07.071456] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:24.653 [2024-12-05 13:50:07.071461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:24.653 [2024-12-05 13:50:07.079371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:24.653 [2024-12-05 13:50:07.079385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:24.653 [2024-12-05 13:50:07.079394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:24.653 [2024-12-05 13:50:07.079401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:24.653 ===================================================== 00:18:24.653 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:24.653 ===================================================== 00:18:24.653 Controller Capabilities/Features 00:18:24.653 
================================ 00:18:24.653 Vendor ID: 4e58 00:18:24.653 Subsystem Vendor ID: 4e58 00:18:24.653 Serial Number: SPDK2 00:18:24.653 Model Number: SPDK bdev Controller 00:18:24.653 Firmware Version: 25.01 00:18:24.653 Recommended Arb Burst: 6 00:18:24.653 IEEE OUI Identifier: 8d 6b 50 00:18:24.653 Multi-path I/O 00:18:24.653 May have multiple subsystem ports: Yes 00:18:24.653 May have multiple controllers: Yes 00:18:24.653 Associated with SR-IOV VF: No 00:18:24.653 Max Data Transfer Size: 131072 00:18:24.653 Max Number of Namespaces: 32 00:18:24.653 Max Number of I/O Queues: 127 00:18:24.653 NVMe Specification Version (VS): 1.3 00:18:24.653 NVMe Specification Version (Identify): 1.3 00:18:24.653 Maximum Queue Entries: 256 00:18:24.653 Contiguous Queues Required: Yes 00:18:24.653 Arbitration Mechanisms Supported 00:18:24.653 Weighted Round Robin: Not Supported 00:18:24.653 Vendor Specific: Not Supported 00:18:24.653 Reset Timeout: 15000 ms 00:18:24.653 Doorbell Stride: 4 bytes 00:18:24.653 NVM Subsystem Reset: Not Supported 00:18:24.653 Command Sets Supported 00:18:24.653 NVM Command Set: Supported 00:18:24.653 Boot Partition: Not Supported 00:18:24.653 Memory Page Size Minimum: 4096 bytes 00:18:24.653 Memory Page Size Maximum: 4096 bytes 00:18:24.653 Persistent Memory Region: Not Supported 00:18:24.653 Optional Asynchronous Events Supported 00:18:24.653 Namespace Attribute Notices: Supported 00:18:24.653 Firmware Activation Notices: Not Supported 00:18:24.653 ANA Change Notices: Not Supported 00:18:24.653 PLE Aggregate Log Change Notices: Not Supported 00:18:24.653 LBA Status Info Alert Notices: Not Supported 00:18:24.653 EGE Aggregate Log Change Notices: Not Supported 00:18:24.653 Normal NVM Subsystem Shutdown event: Not Supported 00:18:24.653 Zone Descriptor Change Notices: Not Supported 00:18:24.653 Discovery Log Change Notices: Not Supported 00:18:24.653 Controller Attributes 00:18:24.653 128-bit Host Identifier: Supported 00:18:24.653 
Non-Operational Permissive Mode: Not Supported 00:18:24.653 NVM Sets: Not Supported 00:18:24.653 Read Recovery Levels: Not Supported 00:18:24.653 Endurance Groups: Not Supported 00:18:24.653 Predictable Latency Mode: Not Supported 00:18:24.653 Traffic Based Keep ALive: Not Supported 00:18:24.653 Namespace Granularity: Not Supported 00:18:24.653 SQ Associations: Not Supported 00:18:24.653 UUID List: Not Supported 00:18:24.653 Multi-Domain Subsystem: Not Supported 00:18:24.653 Fixed Capacity Management: Not Supported 00:18:24.653 Variable Capacity Management: Not Supported 00:18:24.653 Delete Endurance Group: Not Supported 00:18:24.653 Delete NVM Set: Not Supported 00:18:24.653 Extended LBA Formats Supported: Not Supported 00:18:24.653 Flexible Data Placement Supported: Not Supported 00:18:24.653 00:18:24.653 Controller Memory Buffer Support 00:18:24.653 ================================ 00:18:24.653 Supported: No 00:18:24.653 00:18:24.653 Persistent Memory Region Support 00:18:24.653 ================================ 00:18:24.653 Supported: No 00:18:24.653 00:18:24.653 Admin Command Set Attributes 00:18:24.653 ============================ 00:18:24.653 Security Send/Receive: Not Supported 00:18:24.653 Format NVM: Not Supported 00:18:24.653 Firmware Activate/Download: Not Supported 00:18:24.653 Namespace Management: Not Supported 00:18:24.653 Device Self-Test: Not Supported 00:18:24.653 Directives: Not Supported 00:18:24.653 NVMe-MI: Not Supported 00:18:24.653 Virtualization Management: Not Supported 00:18:24.653 Doorbell Buffer Config: Not Supported 00:18:24.653 Get LBA Status Capability: Not Supported 00:18:24.653 Command & Feature Lockdown Capability: Not Supported 00:18:24.653 Abort Command Limit: 4 00:18:24.653 Async Event Request Limit: 4 00:18:24.653 Number of Firmware Slots: N/A 00:18:24.653 Firmware Slot 1 Read-Only: N/A 00:18:24.653 Firmware Activation Without Reset: N/A 00:18:24.653 Multiple Update Detection Support: N/A 00:18:24.653 Firmware Update 
Granularity: No Information Provided 00:18:24.653 Per-Namespace SMART Log: No 00:18:24.653 Asymmetric Namespace Access Log Page: Not Supported 00:18:24.653 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:24.653 Command Effects Log Page: Supported 00:18:24.653 Get Log Page Extended Data: Supported 00:18:24.653 Telemetry Log Pages: Not Supported 00:18:24.653 Persistent Event Log Pages: Not Supported 00:18:24.653 Supported Log Pages Log Page: May Support 00:18:24.653 Commands Supported & Effects Log Page: Not Supported 00:18:24.653 Feature Identifiers & Effects Log Page:May Support 00:18:24.653 NVMe-MI Commands & Effects Log Page: May Support 00:18:24.653 Data Area 4 for Telemetry Log: Not Supported 00:18:24.653 Error Log Page Entries Supported: 128 00:18:24.653 Keep Alive: Supported 00:18:24.653 Keep Alive Granularity: 10000 ms 00:18:24.653 00:18:24.653 NVM Command Set Attributes 00:18:24.653 ========================== 00:18:24.653 Submission Queue Entry Size 00:18:24.653 Max: 64 00:18:24.653 Min: 64 00:18:24.653 Completion Queue Entry Size 00:18:24.653 Max: 16 00:18:24.653 Min: 16 00:18:24.653 Number of Namespaces: 32 00:18:24.653 Compare Command: Supported 00:18:24.653 Write Uncorrectable Command: Not Supported 00:18:24.653 Dataset Management Command: Supported 00:18:24.653 Write Zeroes Command: Supported 00:18:24.653 Set Features Save Field: Not Supported 00:18:24.653 Reservations: Not Supported 00:18:24.653 Timestamp: Not Supported 00:18:24.653 Copy: Supported 00:18:24.653 Volatile Write Cache: Present 00:18:24.653 Atomic Write Unit (Normal): 1 00:18:24.654 Atomic Write Unit (PFail): 1 00:18:24.654 Atomic Compare & Write Unit: 1 00:18:24.654 Fused Compare & Write: Supported 00:18:24.654 Scatter-Gather List 00:18:24.654 SGL Command Set: Supported (Dword aligned) 00:18:24.654 SGL Keyed: Not Supported 00:18:24.654 SGL Bit Bucket Descriptor: Not Supported 00:18:24.654 SGL Metadata Pointer: Not Supported 00:18:24.654 Oversized SGL: Not Supported 00:18:24.654 SGL 
Metadata Address: Not Supported 00:18:24.654 SGL Offset: Not Supported 00:18:24.654 Transport SGL Data Block: Not Supported 00:18:24.654 Replay Protected Memory Block: Not Supported 00:18:24.654 00:18:24.654 Firmware Slot Information 00:18:24.654 ========================= 00:18:24.654 Active slot: 1 00:18:24.654 Slot 1 Firmware Revision: 25.01 00:18:24.654 00:18:24.654 00:18:24.654 Commands Supported and Effects 00:18:24.654 ============================== 00:18:24.654 Admin Commands 00:18:24.654 -------------- 00:18:24.654 Get Log Page (02h): Supported 00:18:24.654 Identify (06h): Supported 00:18:24.654 Abort (08h): Supported 00:18:24.654 Set Features (09h): Supported 00:18:24.654 Get Features (0Ah): Supported 00:18:24.654 Asynchronous Event Request (0Ch): Supported 00:18:24.654 Keep Alive (18h): Supported 00:18:24.654 I/O Commands 00:18:24.654 ------------ 00:18:24.654 Flush (00h): Supported LBA-Change 00:18:24.654 Write (01h): Supported LBA-Change 00:18:24.654 Read (02h): Supported 00:18:24.654 Compare (05h): Supported 00:18:24.654 Write Zeroes (08h): Supported LBA-Change 00:18:24.654 Dataset Management (09h): Supported LBA-Change 00:18:24.654 Copy (19h): Supported LBA-Change 00:18:24.654 00:18:24.654 Error Log 00:18:24.654 ========= 00:18:24.654 00:18:24.654 Arbitration 00:18:24.654 =========== 00:18:24.654 Arbitration Burst: 1 00:18:24.654 00:18:24.654 Power Management 00:18:24.654 ================ 00:18:24.654 Number of Power States: 1 00:18:24.654 Current Power State: Power State #0 00:18:24.654 Power State #0: 00:18:24.654 Max Power: 0.00 W 00:18:24.654 Non-Operational State: Operational 00:18:24.654 Entry Latency: Not Reported 00:18:24.654 Exit Latency: Not Reported 00:18:24.654 Relative Read Throughput: 0 00:18:24.654 Relative Read Latency: 0 00:18:24.654 Relative Write Throughput: 0 00:18:24.654 Relative Write Latency: 0 00:18:24.654 Idle Power: Not Reported 00:18:24.654 Active Power: Not Reported 00:18:24.654 Non-Operational Permissive Mode: Not 
Supported 00:18:24.654 00:18:24.654 Health Information 00:18:24.654 ================== 00:18:24.654 Critical Warnings: 00:18:24.654 Available Spare Space: OK 00:18:24.654 Temperature: OK 00:18:24.654 Device Reliability: OK 00:18:24.654 Read Only: No 00:18:24.654 Volatile Memory Backup: OK 00:18:24.654 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:24.654 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:24.654 Available Spare: 0% 00:18:24.654 Available Sp[2024-12-05 13:50:07.079487] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:24.654 [2024-12-05 13:50:07.087372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:24.654 [2024-12-05 13:50:07.087401] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:18:24.654 [2024-12-05 13:50:07.087409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.654 [2024-12-05 13:50:07.087415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.654 [2024-12-05 13:50:07.087421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.654 [2024-12-05 13:50:07.087426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.654 [2024-12-05 13:50:07.087482] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:24.654 [2024-12-05 13:50:07.087492] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:24.654 
[2024-12-05 13:50:07.088570] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:24.654 [2024-12-05 13:50:07.088613] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:18:24.654 [2024-12-05 13:50:07.088620] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:18:24.654 [2024-12-05 13:50:07.089583] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:24.654 [2024-12-05 13:50:07.089594] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:18:24.654 [2024-12-05 13:50:07.089639] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:24.654 [2024-12-05 13:50:07.090602] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:24.654 are Threshold: 0% 00:18:24.654 Life Percentage Used: 0% 00:18:24.654 Data Units Read: 0 00:18:24.654 Data Units Written: 0 00:18:24.654 Host Read Commands: 0 00:18:24.654 Host Write Commands: 0 00:18:24.654 Controller Busy Time: 0 minutes 00:18:24.654 Power Cycles: 0 00:18:24.654 Power On Hours: 0 hours 00:18:24.654 Unsafe Shutdowns: 0 00:18:24.654 Unrecoverable Media Errors: 0 00:18:24.654 Lifetime Error Log Entries: 0 00:18:24.654 Warning Temperature Time: 0 minutes 00:18:24.654 Critical Temperature Time: 0 minutes 00:18:24.654 00:18:24.654 Number of Queues 00:18:24.654 ================ 00:18:24.654 Number of I/O Submission Queues: 127 00:18:24.654 Number of I/O Completion Queues: 127 00:18:24.654 00:18:24.654 Active Namespaces 00:18:24.654 ================= 00:18:24.654 Namespace ID:1 00:18:24.654 Error Recovery Timeout: Unlimited 
00:18:24.654 Command Set Identifier: NVM (00h) 00:18:24.654 Deallocate: Supported 00:18:24.654 Deallocated/Unwritten Error: Not Supported 00:18:24.654 Deallocated Read Value: Unknown 00:18:24.654 Deallocate in Write Zeroes: Not Supported 00:18:24.654 Deallocated Guard Field: 0xFFFF 00:18:24.654 Flush: Supported 00:18:24.654 Reservation: Supported 00:18:24.654 Namespace Sharing Capabilities: Multiple Controllers 00:18:24.654 Size (in LBAs): 131072 (0GiB) 00:18:24.654 Capacity (in LBAs): 131072 (0GiB) 00:18:24.654 Utilization (in LBAs): 131072 (0GiB) 00:18:24.654 NGUID: 13215D5101FE4B55A9EA4771104AB52D 00:18:24.654 UUID: 13215d51-01fe-4b55-a9ea-4771104ab52d 00:18:24.654 Thin Provisioning: Not Supported 00:18:24.654 Per-NS Atomic Units: Yes 00:18:24.654 Atomic Boundary Size (Normal): 0 00:18:24.654 Atomic Boundary Size (PFail): 0 00:18:24.654 Atomic Boundary Offset: 0 00:18:24.654 Maximum Single Source Range Length: 65535 00:18:24.654 Maximum Copy Length: 65535 00:18:24.654 Maximum Source Range Count: 1 00:18:24.654 NGUID/EUI64 Never Reused: No 00:18:24.654 Namespace Write Protected: No 00:18:24.654 Number of LBA Formats: 1 00:18:24.654 Current LBA Format: LBA Format #00 00:18:24.654 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:24.654 00:18:24.654 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:24.910 [2024-12-05 13:50:07.323602] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:30.160 Initializing NVMe Controllers 00:18:30.160 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:30.160 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:18:30.160 Initialization complete. Launching workers. 00:18:30.160 ======================================================== 00:18:30.160 Latency(us) 00:18:30.160 Device Information : IOPS MiB/s Average min max 00:18:30.160 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39934.59 155.99 3204.84 968.50 6651.74 00:18:30.160 ======================================================== 00:18:30.160 Total : 39934.59 155.99 3204.84 968.50 6651.74 00:18:30.160 00:18:30.160 [2024-12-05 13:50:12.429634] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:30.160 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:30.160 [2024-12-05 13:50:12.662302] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:35.414 Initializing NVMe Controllers 00:18:35.414 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:35.414 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:35.414 Initialization complete. Launching workers. 
00:18:35.414 ======================================================== 00:18:35.414 Latency(us) 00:18:35.414 Device Information : IOPS MiB/s Average min max 00:18:35.414 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39935.03 156.00 3205.04 967.26 7347.76 00:18:35.414 ======================================================== 00:18:35.414 Total : 39935.03 156.00 3205.04 967.26 7347.76 00:18:35.414 00:18:35.414 [2024-12-05 13:50:17.683763] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:35.414 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:35.414 [2024-12-05 13:50:17.897026] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:40.679 [2024-12-05 13:50:23.030463] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:40.679 Initializing NVMe Controllers 00:18:40.679 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:40.679 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:40.679 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:40.680 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:40.680 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:40.680 Initialization complete. Launching workers. 
00:18:40.680 Starting thread on core 2 00:18:40.680 Starting thread on core 3 00:18:40.680 Starting thread on core 1 00:18:40.680 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:40.938 [2024-12-05 13:50:23.326822] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:44.256 [2024-12-05 13:50:26.401277] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:44.256 Initializing NVMe Controllers 00:18:44.256 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:44.256 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:44.256 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:44.256 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:44.256 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:44.256 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:44.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:44.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:44.256 Initialization complete. Launching workers. 
00:18:44.256 Starting thread on core 1 with urgent priority queue 00:18:44.256 Starting thread on core 2 with urgent priority queue 00:18:44.256 Starting thread on core 3 with urgent priority queue 00:18:44.256 Starting thread on core 0 with urgent priority queue 00:18:44.256 SPDK bdev Controller (SPDK2 ) core 0: 8880.00 IO/s 11.26 secs/100000 ios 00:18:44.256 SPDK bdev Controller (SPDK2 ) core 1: 9096.00 IO/s 10.99 secs/100000 ios 00:18:44.256 SPDK bdev Controller (SPDK2 ) core 2: 9891.00 IO/s 10.11 secs/100000 ios 00:18:44.256 SPDK bdev Controller (SPDK2 ) core 3: 9758.67 IO/s 10.25 secs/100000 ios 00:18:44.256 ======================================================== 00:18:44.257 00:18:44.257 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:44.257 [2024-12-05 13:50:26.689767] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:44.257 Initializing NVMe Controllers 00:18:44.257 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:44.257 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:44.257 Namespace ID: 1 size: 0GB 00:18:44.257 Initialization complete. 00:18:44.257 INFO: using host memory buffer for IO 00:18:44.257 Hello world! 
00:18:44.257 [2024-12-05 13:50:26.698831] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:44.257 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:44.515 [2024-12-05 13:50:26.980747] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:45.891 Initializing NVMe Controllers 00:18:45.891 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:45.891 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:45.891 Initialization complete. Launching workers. 00:18:45.891 submit (in ns) avg, min, max = 6731.0, 3142.9, 3999624.8 00:18:45.891 complete (in ns) avg, min, max = 20739.8, 1709.5, 3998161.9 00:18:45.891 00:18:45.891 Submit histogram 00:18:45.891 ================ 00:18:45.891 Range in us Cumulative Count 00:18:45.891 3.139 - 3.154: 0.0241% ( 4) 00:18:45.891 3.154 - 3.170: 0.0481% ( 4) 00:18:45.891 3.170 - 3.185: 0.0542% ( 1) 00:18:45.891 3.185 - 3.200: 0.2647% ( 35) 00:18:45.891 3.200 - 3.215: 1.7810% ( 252) 00:18:45.891 3.215 - 3.230: 6.4501% ( 776) 00:18:45.891 3.230 - 3.246: 12.2684% ( 967) 00:18:45.891 3.246 - 3.261: 18.2431% ( 993) 00:18:45.891 3.261 - 3.276: 24.8075% ( 1091) 00:18:45.891 3.276 - 3.291: 31.3237% ( 1083) 00:18:45.891 3.291 - 3.307: 37.1841% ( 974) 00:18:45.891 3.307 - 3.322: 43.6643% ( 1077) 00:18:45.891 3.322 - 3.337: 49.3682% ( 948) 00:18:45.891 3.337 - 3.352: 53.9350% ( 759) 00:18:45.891 3.352 - 3.368: 59.1456% ( 866) 00:18:45.891 3.368 - 3.383: 67.7918% ( 1437) 00:18:45.891 3.383 - 3.398: 73.6041% ( 966) 00:18:45.891 3.398 - 3.413: 78.2310% ( 769) 00:18:45.891 3.413 - 3.429: 82.4789% ( 706) 00:18:45.891 3.429 - 3.444: 84.7232% ( 373) 00:18:45.891 3.444 - 3.459: 86.3779% ( 275) 
00:18:45.891 3.459 - 3.474: 87.2443% ( 144) 00:18:45.891 3.474 - 3.490: 87.6294% ( 64) 00:18:45.891 3.490 - 3.505: 88.1288% ( 83) 00:18:45.891 3.505 - 3.520: 88.6582% ( 88) 00:18:45.891 3.520 - 3.535: 89.5307% ( 145) 00:18:45.891 3.535 - 3.550: 90.4152% ( 147) 00:18:45.891 3.550 - 3.566: 91.4200% ( 167) 00:18:45.891 3.566 - 3.581: 92.3646% ( 157) 00:18:45.891 3.581 - 3.596: 93.3093% ( 157) 00:18:45.891 3.596 - 3.611: 94.3803% ( 178) 00:18:45.891 3.611 - 3.627: 95.3008% ( 153) 00:18:45.891 3.627 - 3.642: 96.3598% ( 176) 00:18:45.891 3.642 - 3.657: 97.0818% ( 120) 00:18:45.891 3.657 - 3.672: 97.7677% ( 114) 00:18:45.891 3.672 - 3.688: 98.2852% ( 86) 00:18:45.891 3.688 - 3.703: 98.6582% ( 62) 00:18:45.891 3.703 - 3.718: 98.9892% ( 55) 00:18:45.891 3.718 - 3.733: 99.2840% ( 49) 00:18:45.891 3.733 - 3.749: 99.4465% ( 27) 00:18:45.891 3.749 - 3.764: 99.5548% ( 18) 00:18:45.891 3.764 - 3.779: 99.6149% ( 10) 00:18:45.891 3.779 - 3.794: 99.6450% ( 5) 00:18:45.891 3.794 - 3.810: 99.6631% ( 3) 00:18:45.891 3.810 - 3.825: 99.6691% ( 1) 00:18:45.891 3.825 - 3.840: 99.6751% ( 1) 00:18:45.891 3.840 - 3.855: 99.6931% ( 3) 00:18:45.891 4.937 - 4.968: 99.6992% ( 1) 00:18:45.891 5.029 - 5.059: 99.7052% ( 1) 00:18:45.891 5.059 - 5.090: 99.7172% ( 2) 00:18:45.891 5.211 - 5.242: 99.7232% ( 1) 00:18:45.891 5.303 - 5.333: 99.7292% ( 1) 00:18:45.891 5.364 - 5.394: 99.7353% ( 1) 00:18:45.891 5.425 - 5.455: 99.7413% ( 1) 00:18:45.891 5.455 - 5.486: 99.7473% ( 1) 00:18:45.891 5.516 - 5.547: 99.7533% ( 1) 00:18:45.891 5.547 - 5.577: 99.7593% ( 1) 00:18:45.891 5.669 - 5.699: 99.7714% ( 2) 00:18:45.891 5.699 - 5.730: 99.7774% ( 1) 00:18:45.891 5.760 - 5.790: 99.7834% ( 1) 00:18:45.891 5.882 - 5.912: 99.7894% ( 1) 00:18:45.891 5.973 - 6.004: 99.7954% ( 1) 00:18:45.891 6.004 - 6.034: 99.8014% ( 1) 00:18:45.891 6.156 - 6.187: 99.8075% ( 1) 00:18:45.891 6.217 - 6.248: 99.8135% ( 1) 00:18:45.891 6.248 - 6.278: 99.8195% ( 1) 00:18:45.891 6.278 - 6.309: 99.8255% ( 1) 00:18:45.891 6.339 - 6.370: 
99.8315% ( 1) 00:18:45.891 6.370 - 6.400: 99.8375% ( 1) 00:18:45.891 6.400 - 6.430: 99.8436% ( 1) 00:18:45.891 6.461 - 6.491: 99.8556% ( 2) 00:18:45.891 6.491 - 6.522: 99.8676% ( 2) 00:18:45.891 6.979 - 7.010: 99.8736% ( 1) 00:18:45.891 7.528 - 7.558: 99.8797% ( 1) 00:18:45.891 7.680 - 7.710: 99.8857% ( 1) 00:18:45.891 7.924 - 7.985: 99.8977% ( 2) 00:18:45.891 8.655 - 8.716: 99.9037% ( 1) 00:18:45.891 15.543 - 15.604: 99.9097% ( 1) 00:18:45.891 19.261 - 19.383: 99.9158% ( 1) 00:18:45.891 [2024-12-05 13:50:28.082357] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:45.891 3994.575 - 4025.783: 100.0000% ( 14) 00:18:45.891 00:18:45.891 Complete histogram 00:18:45.891 ================== 00:18:45.891 Range in us Cumulative Count 00:18:45.892 1.707 - 1.714: 0.0241% ( 4) 00:18:45.892 1.714 - 1.722: 0.0842% ( 10) 00:18:45.892 1.722 - 1.730: 0.1564% ( 12) 00:18:45.892 1.730 - 1.737: 0.2106% ( 9) 00:18:45.892 1.737 - 1.745: 0.2226% ( 2) 00:18:45.892 1.745 - 1.752: 0.2467% ( 4) 00:18:45.892 1.752 - 1.760: 0.7040% ( 76) 00:18:45.892 1.760 - 1.768: 7.0036% ( 1047) 00:18:45.892 1.768 - 1.775: 26.7449% ( 3281) 00:18:45.892 1.775 - 1.783: 48.0385% ( 3539) 00:18:45.892 1.783 - 1.790: 56.8051% ( 1457) 00:18:45.892 1.790 - 1.798: 59.7894% ( 496) 00:18:45.892 1.798 - 1.806: 62.1480% ( 392) 00:18:45.892 1.806 - 1.813: 64.3381% ( 364) 00:18:45.892 1.813 - 1.821: 70.7882% ( 1072) 00:18:45.892 1.821 - 1.829: 82.2082% ( 1898) 00:18:45.892 1.829 - 1.836: 91.4561% ( 1537) 00:18:45.892 1.836 - 1.844: 94.9639% ( 583) 00:18:45.892 1.844 - 1.851: 96.5945% ( 271) 00:18:45.892 1.851 - 1.859: 97.6534% ( 176) 00:18:45.892 1.859 - 1.867: 98.2972% ( 107) 00:18:45.892 1.867 - 1.874: 98.5740% ( 46) 00:18:45.892 1.874 - 1.882: 98.7124% ( 23) 00:18:45.892 1.882 - 1.890: 98.8568% ( 24) 00:18:45.892 1.890 - 1.897: 98.9471% ( 15) 00:18:45.892 1.897 - 1.905: 99.0734% ( 21) 00:18:45.892 1.905 - 1.912: 99.2058% ( 22) 00:18:45.892 1.912 - 1.920: 
99.3081% ( 17) 00:18:45.892 1.920 - 1.928: 99.3261% ( 3) 00:18:45.892 1.928 - 1.935: 99.3502% ( 4) 00:18:45.892 1.935 - 1.943: 99.3682% ( 3) 00:18:45.892 1.981 - 1.996: 99.3742% ( 1) 00:18:45.892 1.996 - 2.011: 99.3803% ( 1) 00:18:45.892 3.474 - 3.490: 99.3863% ( 1) 00:18:45.892 3.535 - 3.550: 99.3923% ( 1) 00:18:45.892 3.688 - 3.703: 99.3983% ( 1) 00:18:45.892 3.764 - 3.779: 99.4043% ( 1) 00:18:45.892 3.794 - 3.810: 99.4164% ( 2) 00:18:45.892 3.855 - 3.870: 99.4224% ( 1) 00:18:45.892 3.870 - 3.886: 99.4284% ( 1) 00:18:45.892 3.992 - 4.023: 99.4404% ( 2) 00:18:45.892 4.023 - 4.053: 99.4465% ( 1) 00:18:45.892 4.419 - 4.450: 99.4525% ( 1) 00:18:45.892 4.510 - 4.541: 99.4585% ( 1) 00:18:45.892 4.663 - 4.693: 99.4645% ( 1) 00:18:45.892 4.907 - 4.937: 99.4705% ( 1) 00:18:45.892 4.937 - 4.968: 99.4765% ( 1) 00:18:45.892 4.998 - 5.029: 99.4826% ( 1) 00:18:45.892 5.333 - 5.364: 99.4946% ( 2) 00:18:45.892 5.760 - 5.790: 99.5006% ( 1) 00:18:45.892 6.126 - 6.156: 99.5066% ( 1) 00:18:45.892 6.339 - 6.370: 99.5126% ( 1) 00:18:45.892 7.192 - 7.223: 99.5247% ( 2) 00:18:45.892 3245.592 - 3261.196: 99.5307% ( 1) 00:18:45.892 3978.971 - 3994.575: 99.5427% ( 2) 00:18:45.892 3994.575 - 4025.783: 100.0000% ( 76) 00:18:45.892 00:18:45.892 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:45.892 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:45.892 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:45.892 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:45.892 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_get_subsystems 00:18:45.892 [ 00:18:45.892 { 00:18:45.892 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:45.892 "subtype": "Discovery", 00:18:45.892 "listen_addresses": [], 00:18:45.892 "allow_any_host": true, 00:18:45.892 "hosts": [] 00:18:45.892 }, 00:18:45.892 { 00:18:45.892 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:45.892 "subtype": "NVMe", 00:18:45.892 "listen_addresses": [ 00:18:45.892 { 00:18:45.892 "trtype": "VFIOUSER", 00:18:45.892 "adrfam": "IPv4", 00:18:45.892 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:45.892 "trsvcid": "0" 00:18:45.892 } 00:18:45.892 ], 00:18:45.892 "allow_any_host": true, 00:18:45.892 "hosts": [], 00:18:45.892 "serial_number": "SPDK1", 00:18:45.892 "model_number": "SPDK bdev Controller", 00:18:45.892 "max_namespaces": 32, 00:18:45.892 "min_cntlid": 1, 00:18:45.892 "max_cntlid": 65519, 00:18:45.892 "namespaces": [ 00:18:45.892 { 00:18:45.892 "nsid": 1, 00:18:45.892 "bdev_name": "Malloc1", 00:18:45.892 "name": "Malloc1", 00:18:45.892 "nguid": "06DEDF8B237F47CEA4C34B8CA6683058", 00:18:45.892 "uuid": "06dedf8b-237f-47ce-a4c3-4b8ca6683058" 00:18:45.892 }, 00:18:45.892 { 00:18:45.892 "nsid": 2, 00:18:45.892 "bdev_name": "Malloc3", 00:18:45.892 "name": "Malloc3", 00:18:45.892 "nguid": "ADD92C430B7B4134A3C4EB6A2622F330", 00:18:45.892 "uuid": "add92c43-0b7b-4134-a3c4-eb6a2622f330" 00:18:45.892 } 00:18:45.892 ] 00:18:45.892 }, 00:18:45.892 { 00:18:45.892 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:45.892 "subtype": "NVMe", 00:18:45.892 "listen_addresses": [ 00:18:45.892 { 00:18:45.892 "trtype": "VFIOUSER", 00:18:45.892 "adrfam": "IPv4", 00:18:45.892 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:45.892 "trsvcid": "0" 00:18:45.892 } 00:18:45.892 ], 00:18:45.892 "allow_any_host": true, 00:18:45.892 "hosts": [], 00:18:45.892 "serial_number": "SPDK2", 00:18:45.892 "model_number": "SPDK bdev Controller", 00:18:45.892 "max_namespaces": 32, 00:18:45.892 "min_cntlid": 1, 00:18:45.892 "max_cntlid": 65519, 00:18:45.892 
"namespaces": [ 00:18:45.892 { 00:18:45.892 "nsid": 1, 00:18:45.892 "bdev_name": "Malloc2", 00:18:45.892 "name": "Malloc2", 00:18:45.892 "nguid": "13215D5101FE4B55A9EA4771104AB52D", 00:18:45.892 "uuid": "13215d51-01fe-4b55-a9ea-4771104ab52d" 00:18:45.892 } 00:18:45.892 ] 00:18:45.892 } 00:18:45.892 ] 00:18:45.892 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:45.892 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=637067 00:18:45.892 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:45.892 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:45.892 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:45.892 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:45.892 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:45.892 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:45.892 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:45.892 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:46.151 [2024-12-05 13:50:28.488757] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:46.151 Malloc4 00:18:46.151 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:46.151 [2024-12-05 13:50:28.723533] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:46.409 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:46.409 Asynchronous Event Request test 00:18:46.409 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:46.409 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:46.409 Registering asynchronous event callbacks... 00:18:46.409 Starting namespace attribute notice tests for all controllers... 00:18:46.409 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:46.409 aer_cb - Changed Namespace 00:18:46.409 Cleaning up... 
00:18:46.409 [ 00:18:46.409 { 00:18:46.409 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:46.409 "subtype": "Discovery", 00:18:46.409 "listen_addresses": [], 00:18:46.409 "allow_any_host": true, 00:18:46.409 "hosts": [] 00:18:46.409 }, 00:18:46.409 { 00:18:46.409 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:46.409 "subtype": "NVMe", 00:18:46.409 "listen_addresses": [ 00:18:46.409 { 00:18:46.409 "trtype": "VFIOUSER", 00:18:46.409 "adrfam": "IPv4", 00:18:46.409 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:46.409 "trsvcid": "0" 00:18:46.409 } 00:18:46.409 ], 00:18:46.409 "allow_any_host": true, 00:18:46.409 "hosts": [], 00:18:46.409 "serial_number": "SPDK1", 00:18:46.409 "model_number": "SPDK bdev Controller", 00:18:46.409 "max_namespaces": 32, 00:18:46.409 "min_cntlid": 1, 00:18:46.409 "max_cntlid": 65519, 00:18:46.409 "namespaces": [ 00:18:46.409 { 00:18:46.409 "nsid": 1, 00:18:46.409 "bdev_name": "Malloc1", 00:18:46.409 "name": "Malloc1", 00:18:46.409 "nguid": "06DEDF8B237F47CEA4C34B8CA6683058", 00:18:46.409 "uuid": "06dedf8b-237f-47ce-a4c3-4b8ca6683058" 00:18:46.409 }, 00:18:46.409 { 00:18:46.409 "nsid": 2, 00:18:46.409 "bdev_name": "Malloc3", 00:18:46.409 "name": "Malloc3", 00:18:46.409 "nguid": "ADD92C430B7B4134A3C4EB6A2622F330", 00:18:46.409 "uuid": "add92c43-0b7b-4134-a3c4-eb6a2622f330" 00:18:46.409 } 00:18:46.409 ] 00:18:46.409 }, 00:18:46.409 { 00:18:46.409 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:46.409 "subtype": "NVMe", 00:18:46.409 "listen_addresses": [ 00:18:46.409 { 00:18:46.409 "trtype": "VFIOUSER", 00:18:46.409 "adrfam": "IPv4", 00:18:46.409 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:46.409 "trsvcid": "0" 00:18:46.409 } 00:18:46.409 ], 00:18:46.409 "allow_any_host": true, 00:18:46.409 "hosts": [], 00:18:46.409 "serial_number": "SPDK2", 00:18:46.409 "model_number": "SPDK bdev Controller", 00:18:46.409 "max_namespaces": 32, 00:18:46.409 "min_cntlid": 1, 00:18:46.409 "max_cntlid": 65519, 00:18:46.409 "namespaces": [ 
00:18:46.409 { 00:18:46.409 "nsid": 1, 00:18:46.409 "bdev_name": "Malloc2", 00:18:46.409 "name": "Malloc2", 00:18:46.409 "nguid": "13215D5101FE4B55A9EA4771104AB52D", 00:18:46.409 "uuid": "13215d51-01fe-4b55-a9ea-4771104ab52d" 00:18:46.409 }, 00:18:46.409 { 00:18:46.409 "nsid": 2, 00:18:46.409 "bdev_name": "Malloc4", 00:18:46.409 "name": "Malloc4", 00:18:46.409 "nguid": "D0B36451B8774A6CBE572837C58DDC14", 00:18:46.409 "uuid": "d0b36451-b877-4a6c-be57-2837c58ddc14" 00:18:46.409 } 00:18:46.409 ] 00:18:46.409 } 00:18:46.409 ] 00:18:46.409 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 637067 00:18:46.409 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:46.409 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 629368 00:18:46.409 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 629368 ']' 00:18:46.409 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 629368 00:18:46.409 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:46.409 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.409 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 629368 00:18:46.681 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:46.682 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:46.682 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 629368' 00:18:46.682 killing process with pid 629368 00:18:46.682 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 629368 00:18:46.682 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 629368 00:18:46.682 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:46.682 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:46.682 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:46.682 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:46.682 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:46.682 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=637214 00:18:46.682 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 637214' 00:18:46.682 Process pid: 637214 00:18:46.682 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:46.682 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:46.682 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 637214 00:18:46.682 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 637214 ']' 00:18:46.682 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.682 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:46.682 13:50:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.682 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:46.683 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:46.949 [2024-12-05 13:50:29.286525] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:46.949 [2024-12-05 13:50:29.287382] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:18:46.949 [2024-12-05 13:50:29.287437] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.949 [2024-12-05 13:50:29.360874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:46.949 [2024-12-05 13:50:29.397586] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.949 [2024-12-05 13:50:29.397625] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.949 [2024-12-05 13:50:29.397632] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.949 [2024-12-05 13:50:29.397639] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.949 [2024-12-05 13:50:29.397645] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:46.949 [2024-12-05 13:50:29.399096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.949 [2024-12-05 13:50:29.399206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.949 [2024-12-05 13:50:29.399290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.949 [2024-12-05 13:50:29.399290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:46.950 [2024-12-05 13:50:29.468049] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:46.950 [2024-12-05 13:50:29.468560] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:46.950 [2024-12-05 13:50:29.468890] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:18:46.950 [2024-12-05 13:50:29.469083] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:46.950 [2024-12-05 13:50:29.469134] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:18:46.950 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.950 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:46.950 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:48.328 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:48.328 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:48.328 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:48.328 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:48.328 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:48.328 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:48.328 Malloc1 00:18:48.630 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:48.630 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:48.921 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:18:49.210 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:49.210 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:49.210 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:49.210 Malloc2 00:18:49.210 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:49.468 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:49.726 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:49.985 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:49.985 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 637214 00:18:49.985 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 637214 ']' 00:18:49.985 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 637214 00:18:49.985 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:49.985 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.985 13:50:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 637214 00:18:49.985 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:49.985 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:49.985 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 637214' 00:18:49.985 killing process with pid 637214 00:18:49.985 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 637214 00:18:49.985 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 637214 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:50.245 00:18:50.245 real 0m50.810s 00:18:50.245 user 3m16.546s 00:18:50.245 sys 0m3.171s 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:50.245 ************************************ 00:18:50.245 END TEST nvmf_vfio_user 00:18:50.245 ************************************ 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:18:50.245 ************************************ 00:18:50.245 START TEST nvmf_vfio_user_nvme_compliance 00:18:50.245 ************************************ 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:50.245 * Looking for test storage... 00:18:50.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:18:50.245 13:50:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:18:50.245 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:50.505 13:50:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:50.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.505 --rc genhtml_branch_coverage=1 00:18:50.505 --rc genhtml_function_coverage=1 00:18:50.505 --rc genhtml_legend=1 00:18:50.505 --rc geninfo_all_blocks=1 00:18:50.505 --rc geninfo_unexecuted_blocks=1 00:18:50.505 00:18:50.505 ' 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:50.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.505 --rc genhtml_branch_coverage=1 00:18:50.505 --rc genhtml_function_coverage=1 00:18:50.505 --rc genhtml_legend=1 00:18:50.505 --rc geninfo_all_blocks=1 00:18:50.505 --rc geninfo_unexecuted_blocks=1 00:18:50.505 00:18:50.505 ' 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:50.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.505 --rc genhtml_branch_coverage=1 00:18:50.505 --rc genhtml_function_coverage=1 00:18:50.505 --rc 
genhtml_legend=1 00:18:50.505 --rc geninfo_all_blocks=1 00:18:50.505 --rc geninfo_unexecuted_blocks=1 00:18:50.505 00:18:50.505 ' 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:50.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.505 --rc genhtml_branch_coverage=1 00:18:50.505 --rc genhtml_function_coverage=1 00:18:50.505 --rc genhtml_legend=1 00:18:50.505 --rc geninfo_all_blocks=1 00:18:50.505 --rc geninfo_unexecuted_blocks=1 00:18:50.505 00:18:50.505 ' 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.505 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.506 13:50:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:50.506 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.506 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:18:50.506 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:50.506 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:50.506 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:50.506 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:50.506 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:50.506 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:50.506 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:50.506 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:50.506 13:50:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:50.506 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:50.506 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:50.506 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:50.506 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:50.506 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:50.506 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:50.506 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=637975 00:18:50.506 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 637975' 00:18:50.506 Process pid: 637975 00:18:50.506 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:50.506 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:50.506 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 637975 00:18:50.506 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 637975 ']' 00:18:50.506 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:18:50.506 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.506 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.506 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.506 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:50.506 [2024-12-05 13:50:32.916434] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:18:50.506 [2024-12-05 13:50:32.916481] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.506 [2024-12-05 13:50:32.992377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:50.506 [2024-12-05 13:50:33.033563] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.506 [2024-12-05 13:50:33.033600] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.506 [2024-12-05 13:50:33.033607] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.506 [2024-12-05 13:50:33.033617] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.506 [2024-12-05 13:50:33.033622] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:50.506 [2024-12-05 13:50:33.035030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.506 [2024-12-05 13:50:33.035153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.506 [2024-12-05 13:50:33.035154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:50.764 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.764 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:18:50.764 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:51.699 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:51.699 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:51.699 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:51.699 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.699 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:51.699 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.699 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:51.699 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:51.699 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.699 13:50:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:51.699 malloc0 00:18:51.699 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.699 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:51.699 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.699 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:51.699 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.699 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:51.699 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.699 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:51.699 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.699 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:51.699 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.699 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:51.699 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:18:51.699 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:51.958 00:18:51.958 00:18:51.958 CUnit - A unit testing framework for C - Version 2.1-3 00:18:51.958 http://cunit.sourceforge.net/ 00:18:51.958 00:18:51.958 00:18:51.958 Suite: nvme_compliance 00:18:51.958 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-05 13:50:34.372874] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:51.958 [2024-12-05 13:50:34.374203] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:51.958 [2024-12-05 13:50:34.374218] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:51.958 [2024-12-05 13:50:34.374225] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:51.958 [2024-12-05 13:50:34.377899] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:51.958 passed 00:18:51.958 Test: admin_identify_ctrlr_verify_fused ...[2024-12-05 13:50:34.454405] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:51.958 [2024-12-05 13:50:34.457425] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:51.958 passed 00:18:51.958 Test: admin_identify_ns ...[2024-12-05 13:50:34.536666] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:52.216 [2024-12-05 13:50:34.591378] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:52.216 [2024-12-05 13:50:34.599384] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:52.216 [2024-12-05 13:50:34.620477] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:18:52.216 passed 00:18:52.216 Test: admin_get_features_mandatory_features ...[2024-12-05 13:50:34.697288] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:52.216 [2024-12-05 13:50:34.700311] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:52.216 passed 00:18:52.216 Test: admin_get_features_optional_features ...[2024-12-05 13:50:34.775804] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:52.216 [2024-12-05 13:50:34.779824] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:52.474 passed 00:18:52.474 Test: admin_set_features_number_of_queues ...[2024-12-05 13:50:34.858652] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:52.474 [2024-12-05 13:50:34.964454] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:52.474 passed 00:18:52.474 Test: admin_get_log_page_mandatory_logs ...[2024-12-05 13:50:35.040280] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:52.474 [2024-12-05 13:50:35.043304] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:52.731 passed 00:18:52.731 Test: admin_get_log_page_with_lpo ...[2024-12-05 13:50:35.117922] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:52.731 [2024-12-05 13:50:35.189378] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:52.731 [2024-12-05 13:50:35.202440] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:52.731 passed 00:18:52.731 Test: fabric_property_get ...[2024-12-05 13:50:35.278219] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:52.731 [2024-12-05 13:50:35.279459] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:52.731 [2024-12-05 13:50:35.281234] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:52.731 passed 00:18:52.988 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-05 13:50:35.357734] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:52.988 [2024-12-05 13:50:35.358962] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:52.988 [2024-12-05 13:50:35.360751] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:52.988 passed 00:18:52.988 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-05 13:50:35.436798] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:52.988 [2024-12-05 13:50:35.524380] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:52.988 [2024-12-05 13:50:35.540380] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:52.988 [2024-12-05 13:50:35.545461] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:52.988 passed 00:18:53.246 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-05 13:50:35.622402] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:53.246 [2024-12-05 13:50:35.623636] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:53.246 [2024-12-05 13:50:35.625427] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:53.246 passed 00:18:53.246 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-05 13:50:35.700720] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:53.246 [2024-12-05 13:50:35.776376] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:53.246 [2024-12-05 
13:50:35.800374] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:53.246 [2024-12-05 13:50:35.805462] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:53.504 passed 00:18:53.504 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-05 13:50:35.882216] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:53.504 [2024-12-05 13:50:35.883454] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:53.504 [2024-12-05 13:50:35.883478] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:53.504 [2024-12-05 13:50:35.885229] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:53.504 passed 00:18:53.504 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-05 13:50:35.961891] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:53.504 [2024-12-05 13:50:36.054378] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:53.504 [2024-12-05 13:50:36.062375] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:53.504 [2024-12-05 13:50:36.070379] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:53.504 [2024-12-05 13:50:36.078372] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:53.763 [2024-12-05 13:50:36.107460] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:53.763 passed 00:18:53.763 Test: admin_create_io_sq_verify_pc ...[2024-12-05 13:50:36.183181] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:53.763 [2024-12-05 13:50:36.199380] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:53.763 [2024-12-05 13:50:36.217353] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:53.763 passed 00:18:53.763 Test: admin_create_io_qp_max_qps ...[2024-12-05 13:50:36.295883] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:55.137 [2024-12-05 13:50:37.397378] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:18:55.395 [2024-12-05 13:50:37.791023] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:55.395 passed 00:18:55.395 Test: admin_create_io_sq_shared_cq ...[2024-12-05 13:50:37.867944] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:55.653 [2024-12-05 13:50:38.000379] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:55.653 [2024-12-05 13:50:38.037442] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:55.653 passed 00:18:55.653 00:18:55.653 Run Summary: Type Total Ran Passed Failed Inactive 00:18:55.653 suites 1 1 n/a 0 0 00:18:55.653 tests 18 18 18 0 0 00:18:55.653 asserts 360 360 360 0 n/a 00:18:55.653 00:18:55.653 Elapsed time = 1.501 seconds 00:18:55.653 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 637975 00:18:55.653 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 637975 ']' 00:18:55.653 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 637975 00:18:55.653 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:18:55.653 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.653 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 637975 00:18:55.653 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:55.653 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:55.653 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 637975' 00:18:55.653 killing process with pid 637975 00:18:55.653 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 637975 00:18:55.653 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 637975 00:18:55.912 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:55.912 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:55.912 00:18:55.912 real 0m5.662s 00:18:55.912 user 0m15.807s 00:18:55.912 sys 0m0.546s 00:18:55.912 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:55.912 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:55.912 ************************************ 00:18:55.912 END TEST nvmf_vfio_user_nvme_compliance 00:18:55.912 ************************************ 00:18:55.912 13:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:55.912 13:50:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:55.912 13:50:38 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:18:55.912 13:50:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:55.912 ************************************ 00:18:55.912 START TEST nvmf_vfio_user_fuzz 00:18:55.912 ************************************ 00:18:55.912 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:55.912 * Looking for test storage... 00:18:55.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:55.912 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:55.912 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:18:55.912 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:56.171 13:50:38 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:56.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.171 --rc genhtml_branch_coverage=1 00:18:56.171 --rc genhtml_function_coverage=1 00:18:56.171 --rc genhtml_legend=1 00:18:56.171 --rc geninfo_all_blocks=1 00:18:56.171 --rc geninfo_unexecuted_blocks=1 00:18:56.171 00:18:56.171 ' 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:56.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.171 --rc genhtml_branch_coverage=1 00:18:56.171 --rc genhtml_function_coverage=1 00:18:56.171 --rc genhtml_legend=1 00:18:56.171 --rc geninfo_all_blocks=1 00:18:56.171 --rc geninfo_unexecuted_blocks=1 00:18:56.171 00:18:56.171 ' 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:56.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.171 --rc genhtml_branch_coverage=1 00:18:56.171 --rc genhtml_function_coverage=1 00:18:56.171 --rc genhtml_legend=1 00:18:56.171 --rc geninfo_all_blocks=1 00:18:56.171 --rc geninfo_unexecuted_blocks=1 00:18:56.171 00:18:56.171 ' 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:56.171 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:18:56.171 --rc genhtml_branch_coverage=1 00:18:56.171 --rc genhtml_function_coverage=1 00:18:56.171 --rc genhtml_legend=1 00:18:56.171 --rc geninfo_all_blocks=1 00:18:56.171 --rc geninfo_unexecuted_blocks=1 00:18:56.171 00:18:56.171 ' 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:56.171 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.171 13:50:38 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.172 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.172 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:56.172 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.172 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:56.172 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:56.172 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:56.172 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:56.172 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:56.172 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:56.172 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:56.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:56.172 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:56.172 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:56.172 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:56.172 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:18:56.172 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:56.172 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:56.172 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:56.172 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:56.172 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:56.172 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:56.172 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=638965 00:18:56.172 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 638965' 00:18:56.172 Process pid: 638965 00:18:56.172 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:56.172 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:56.172 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 638965 00:18:56.172 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 638965 ']' 00:18:56.172 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.172 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.172 13:50:38 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.172 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.172 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:56.430 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:56.430 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:18:56.430 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:57.382 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:57.382 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.382 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:57.382 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.382 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:57.382 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:57.382 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.382 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:57.382 malloc0 00:18:57.382 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.382 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:57.382 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.382 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:57.382 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.382 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:57.382 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.382 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:57.382 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.382 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:57.382 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.382 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:57.382 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.382 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:18:57.382 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a
00:19:29.463 Fuzzing completed. Shutting down the fuzz application
00:19:29.463
00:19:29.463 Dumping successful admin opcodes:
00:19:29.463 9, 10,
00:19:29.463 Dumping successful io opcodes:
00:19:29.463 0,
00:19:29.463 NS: 0x20000081ef00 I/O qp, Total commands completed: 1120894, total successful commands: 4411, random_seed: 1281209792
00:19:29.463 NS: 0x20000081ef00 admin qp, Total commands completed: 274800, total successful commands: 64, random_seed: 3598268736
00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0
00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 638965
00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 638965 ']'
00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 638965
00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname
00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 638965
00:19:29.463 13:51:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 638965' 00:19:29.463 killing process with pid 638965 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 638965 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 638965 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:29.463 00:19:29.463 real 0m32.243s 00:19:29.463 user 0m33.669s 00:19:29.463 sys 0m27.159s 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:29.463 ************************************ 00:19:29.463 END TEST nvmf_vfio_user_fuzz 00:19:29.463 ************************************ 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:29.463 ************************************ 00:19:29.463 START TEST nvmf_auth_target 00:19:29.463 ************************************ 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:29.463 * Looking for test storage... 00:19:29.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:29.463 13:51:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:29.463 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:29.464 13:51:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:29.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.464 --rc genhtml_branch_coverage=1 00:19:29.464 --rc genhtml_function_coverage=1 00:19:29.464 --rc genhtml_legend=1 00:19:29.464 --rc geninfo_all_blocks=1 00:19:29.464 --rc geninfo_unexecuted_blocks=1 00:19:29.464 00:19:29.464 ' 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:29.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.464 --rc genhtml_branch_coverage=1 00:19:29.464 --rc genhtml_function_coverage=1 00:19:29.464 --rc genhtml_legend=1 00:19:29.464 --rc geninfo_all_blocks=1 00:19:29.464 --rc geninfo_unexecuted_blocks=1 00:19:29.464 00:19:29.464 ' 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:29.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.464 --rc genhtml_branch_coverage=1 00:19:29.464 --rc genhtml_function_coverage=1 00:19:29.464 --rc genhtml_legend=1 00:19:29.464 --rc geninfo_all_blocks=1 00:19:29.464 --rc geninfo_unexecuted_blocks=1 00:19:29.464 00:19:29.464 ' 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:29.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.464 --rc genhtml_branch_coverage=1 00:19:29.464 --rc genhtml_function_coverage=1 00:19:29.464 --rc genhtml_legend=1 00:19:29.464 
--rc geninfo_all_blocks=1 00:19:29.464 --rc geninfo_unexecuted_blocks=1 00:19:29.464 00:19:29.464 ' 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:29.464 
13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:29.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:29.464 13:51:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:29.464 13:51:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:29.464 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:34.734 13:51:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:34.734 13:51:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:34.734 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:34.734 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:34.734 
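The `gather_supported_nvmf_pci_devs` trace above walks a table of vendor:device IDs and reports each match (the log finds the Intel E810 pair 0x8086:0x159b twice, at 0000:86:00.0 and .1). A minimal sketch of that scan, with the sysfs root taken as a parameter so it can be demonstrated on a fake tree — the function name and the parameterized root are illustrative, not the script's actual helper:

```shell
# Hedged sketch of the device scan the log performs: walk a PCI devices
# directory, keep functions matching the E810 vendor:device pair, and
# report the net interface(s) sysfs exposes under each one.
find_e810_netdevs() {
    local root=$1 pci vendor device net
    for pci in "$root"/*; do
        vendor=$(< "$pci/vendor") device=$(< "$pci/device")
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done
}
```

On the machine in this log, `find_e810_netdevs /sys/bus/pci/devices` would print the same "Found 0000:86:00.0 (0x8086 - 0x159b)" / "Found net devices under 0000:86:00.0: cvl_0_0" pairs seen in the trace.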
13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:34.734 Found net devices under 0000:86:00.0: cvl_0_0 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:34.734 
13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:34.734 Found net devices under 0000:86:00.1: cvl_0_1 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:34.734 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:34.735 13:51:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:34.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:34.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:19:34.735 00:19:34.735 --- 10.0.0.2 ping statistics --- 00:19:34.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.735 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:34.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:34.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:19:34.735 00:19:34.735 --- 10.0.0.1 ping statistics --- 00:19:34.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.735 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=647778 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 647778 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 647778 ']' 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
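After launching `nvmf_tgt` inside the namespace, the script blocks in `waitforlisten 647778` until the target's RPC socket comes up. A simplified sketch of that polling pattern — the real helper in `autotest_common.sh` also confirms the listener with an RPC round-trip, which this sketch omits:

```shell
# Hedged sketch of waitforlisten: poll until the given pid has bound its
# UNIX-domain RPC socket, failing fast if the process exits and giving up
# after max_retries attempts.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    while ((max_retries-- > 0)); do
        kill -0 "$pid" 2> /dev/null || return 1  # process died before listening
        [[ -S $rpc_addr ]] && return 0           # socket exists: target is up
        sleep 0.1
    done
    return 1
}
```

The pid check matters here because the target is launched in the background; without it, a crashed `nvmf_tgt` would make the caller spin for the full retry budget instead of failing immediately.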
00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.735 13:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=647803 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # digest=null 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1a189ee35e1cc0fffb8a2903cd8e6fcf9efdb9f885e0e490 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.uTe 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1a189ee35e1cc0fffb8a2903cd8e6fcf9efdb9f885e0e490 0 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1a189ee35e1cc0fffb8a2903cd8e6fcf9efdb9f885e0e490 0 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1a189ee35e1cc0fffb8a2903cd8e6fcf9efdb9f885e0e490 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.uTe 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.uTe 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.uTe 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e50dbb3fc05d0f7271ab603d3a4841a9af92c218f0b7f6150ca61eb975ca7893 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.LuM 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e50dbb3fc05d0f7271ab603d3a4841a9af92c218f0b7f6150ca61eb975ca7893 3 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e50dbb3fc05d0f7271ab603d3a4841a9af92c218f0b7f6150ca61eb975ca7893 3 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e50dbb3fc05d0f7271ab603d3a4841a9af92c218f0b7f6150ca61eb975ca7893 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.LuM 00:19:34.735 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.LuM 00:19:34.736 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.LuM 00:19:34.736 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:34.736 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:34.736 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.736 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:34.736 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:34.736 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:34.736 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:34.736 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=51c1c2f2049028ab6f788ee5b2c37610 00:19:34.736 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:34.736 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.7Ak 00:19:34.736 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 51c1c2f2049028ab6f788ee5b2c37610 1 00:19:34.736 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
51c1c2f2049028ab6f788ee5b2c37610 1 00:19:34.736 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:34.736 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:34.736 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=51c1c2f2049028ab6f788ee5b2c37610 00:19:34.736 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:34.736 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.7Ak 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.7Ak 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.7Ak 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e3c0554fb80a018ec614d99130c1bd4703ecd277d0fd722b 00:19:34.996 13:51:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.xTD 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e3c0554fb80a018ec614d99130c1bd4703ecd277d0fd722b 2 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e3c0554fb80a018ec614d99130c1bd4703ecd277d0fd722b 2 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e3c0554fb80a018ec614d99130c1bd4703ecd277d0fd722b 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.xTD 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.xTD 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.xTD 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b2fe1dbba54dbe8ce434bdfd35f19dd9ae436cad6cb7a5a7 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.FHs 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b2fe1dbba54dbe8ce434bdfd35f19dd9ae436cad6cb7a5a7 2 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b2fe1dbba54dbe8ce434bdfd35f19dd9ae436cad6cb7a5a7 2 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b2fe1dbba54dbe8ce434bdfd35f19dd9ae436cad6cb7a5a7 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.FHs 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.FHs 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.FHs 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9b309d1e4530ee17385adf5b175573f5 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.jD1 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9b309d1e4530ee17385adf5b175573f5 1 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9b309d1e4530ee17385adf5b175573f5 1 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9b309d1e4530ee17385adf5b175573f5 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.jD1 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.jD1 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.jD1 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2d48963073ba11cfef3a80b457bc7ac7bfbcdfd58ea9ef7c5b6bb7a03be1a5f4 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.XEq 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2d48963073ba11cfef3a80b457bc7ac7bfbcdfd58ea9ef7c5b6bb7a03be1a5f4 3 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 2d48963073ba11cfef3a80b457bc7ac7bfbcdfd58ea9ef7c5b6bb7a03be1a5f4 3 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2d48963073ba11cfef3a80b457bc7ac7bfbcdfd58ea9ef7c5b6bb7a03be1a5f4 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:34.996 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:35.255 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.XEq 00:19:35.255 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.XEq 00:19:35.255 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.XEq 00:19:35.255 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:35.255 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 647778 00:19:35.255 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 647778 ']' 00:19:35.255 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.255 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:35.255 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
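Each `gen_dhchap_key` call above draws `len/2` random bytes as hex via `xxd`, then pipes them through an inline `python -` helper (`format_key DHHC-1 <hex> <digest>`) whose body the log does not show. A sketch of the pair under stated assumptions: the digest argument is taken as the numeric id directly (the script maps names like `sha256` to 1 through its `digests` table first), and the encoding is assumed to follow the NVMe DH-HMAC-CHAP secret representation — base64 of the key bytes plus their little-endian CRC32, wrapped as `DHHC-1:<digest>:<base64>:`:

```shell
# Hedged sketch of gen_dhchap_key/format_key as expanded in the log.
# digest: numeric id (0=null, 1=sha256, 2=sha384, 3=sha512); len: hex chars.
gen_dhchap_key() {
    local digest=$1 len=$2 key
    # the script uses `xxd -p -c0`; plain -p plus tr works on older xxd too
    key=$(xxd -p -l $((len / 2)) /dev/urandom | tr -d '\n')
    python3 - "$key" "$digest" << 'EOF'
import base64, sys, zlib

key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")  # integrity word appended per the assumed format
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF
}
```

For example, `gen_dhchap_key 1 32` yields a secret of the shape `DHHC-1:01:<base64>:`, which the script then writes to a `mktemp`-created `/tmp/spdk.key-sha256.XXX` file, `chmod 0600`s, and records in `keys[]`/`ckeys[]`.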
00:19:35.255 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:35.255 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.255 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:35.255 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:35.255 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 647803 /var/tmp/host.sock 00:19:35.255 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 647803 ']' 00:19:35.255 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:35.255 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:35.255 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:35.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:19:35.255 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:35.255 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.514 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:35.514 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:35.514 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:35.514 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.514 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.514 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.514 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:35.514 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.uTe 00:19:35.514 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.514 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.514 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.514 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.uTe 00:19:35.514 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.uTe 00:19:35.773 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.LuM ]] 00:19:35.773 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LuM 00:19:35.773 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.773 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.773 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.773 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LuM 00:19:35.773 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LuM 00:19:36.032 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:36.032 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.7Ak 00:19:36.032 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.032 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.032 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.032 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.7Ak 00:19:36.032 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.7Ak 00:19:36.033 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.xTD ]] 00:19:36.033 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xTD 00:19:36.033 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.033 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.033 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.033 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xTD 00:19:36.033 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xTD 00:19:36.291 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:36.291 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.FHs 00:19:36.291 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.291 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.291 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.291 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.FHs 00:19:36.291 13:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.FHs 00:19:36.550 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.jD1 ]] 00:19:36.550 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jD1 00:19:36.550 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.550 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.550 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.550 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jD1 00:19:36.550 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jD1 00:19:36.810 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:36.810 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.XEq 00:19:36.810 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.810 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.810 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.810 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.XEq 00:19:36.810 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.XEq 00:19:36.810 13:51:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:36.810 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:36.810 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:36.810 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.810 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:37.069 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:37.069 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:37.069 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.069 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:37.069 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:37.069 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:37.069 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.069 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.069 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.069 13:51:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.069 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.069 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.069 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.069 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.328 00:19:37.328 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.328 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.328 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.586 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.586 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.586 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.586 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:37.586 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.586 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.586 { 00:19:37.586 "cntlid": 1, 00:19:37.586 "qid": 0, 00:19:37.586 "state": "enabled", 00:19:37.587 "thread": "nvmf_tgt_poll_group_000", 00:19:37.587 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:37.587 "listen_address": { 00:19:37.587 "trtype": "TCP", 00:19:37.587 "adrfam": "IPv4", 00:19:37.587 "traddr": "10.0.0.2", 00:19:37.587 "trsvcid": "4420" 00:19:37.587 }, 00:19:37.587 "peer_address": { 00:19:37.587 "trtype": "TCP", 00:19:37.587 "adrfam": "IPv4", 00:19:37.587 "traddr": "10.0.0.1", 00:19:37.587 "trsvcid": "52068" 00:19:37.587 }, 00:19:37.587 "auth": { 00:19:37.587 "state": "completed", 00:19:37.587 "digest": "sha256", 00:19:37.587 "dhgroup": "null" 00:19:37.587 } 00:19:37.587 } 00:19:37.587 ]' 00:19:37.587 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.587 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:37.587 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.587 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:37.587 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.846 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.846 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.846 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.846 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:19:37.846 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:19:38.413 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.413 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:38.413 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.413 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.413 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.413 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:38.413 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:19:38.413 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:38.673 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:38.673 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:38.673 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:38.673 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:38.673 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:38.673 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.673 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.673 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.673 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.673 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.673 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.673 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.673 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.932 00:19:38.932 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:38.932 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:38.932 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.191 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.191 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.191 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.191 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.191 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.191 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.191 { 00:19:39.191 "cntlid": 3, 00:19:39.191 "qid": 0, 00:19:39.191 "state": "enabled", 00:19:39.191 "thread": "nvmf_tgt_poll_group_000", 00:19:39.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:39.191 "listen_address": { 00:19:39.191 "trtype": "TCP", 00:19:39.191 "adrfam": "IPv4", 00:19:39.191 
"traddr": "10.0.0.2", 00:19:39.191 "trsvcid": "4420" 00:19:39.191 }, 00:19:39.191 "peer_address": { 00:19:39.191 "trtype": "TCP", 00:19:39.191 "adrfam": "IPv4", 00:19:39.191 "traddr": "10.0.0.1", 00:19:39.191 "trsvcid": "48738" 00:19:39.191 }, 00:19:39.191 "auth": { 00:19:39.191 "state": "completed", 00:19:39.191 "digest": "sha256", 00:19:39.191 "dhgroup": "null" 00:19:39.191 } 00:19:39.191 } 00:19:39.191 ]' 00:19:39.191 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.191 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:39.191 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.191 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:39.191 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.191 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.191 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.191 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.450 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:19:39.450 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:19:40.018 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.018 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:40.018 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.018 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.018 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.018 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:40.018 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:40.018 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:40.277 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:40.277 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:40.277 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:40.277 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:19:40.277 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:40.277 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.277 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.277 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.277 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.277 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.277 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.277 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.277 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.536 00:19:40.536 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.536 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.536 
13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.795 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.795 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.795 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.795 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.795 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.795 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.795 { 00:19:40.795 "cntlid": 5, 00:19:40.795 "qid": 0, 00:19:40.795 "state": "enabled", 00:19:40.795 "thread": "nvmf_tgt_poll_group_000", 00:19:40.795 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:40.795 "listen_address": { 00:19:40.795 "trtype": "TCP", 00:19:40.795 "adrfam": "IPv4", 00:19:40.795 "traddr": "10.0.0.2", 00:19:40.795 "trsvcid": "4420" 00:19:40.795 }, 00:19:40.795 "peer_address": { 00:19:40.795 "trtype": "TCP", 00:19:40.795 "adrfam": "IPv4", 00:19:40.795 "traddr": "10.0.0.1", 00:19:40.795 "trsvcid": "48766" 00:19:40.795 }, 00:19:40.795 "auth": { 00:19:40.795 "state": "completed", 00:19:40.795 "digest": "sha256", 00:19:40.795 "dhgroup": "null" 00:19:40.795 } 00:19:40.795 } 00:19:40.795 ]' 00:19:40.795 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.795 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.795 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:19:40.795 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:40.795 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.795 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.795 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.795 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.054 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:19:41.054 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:19:41.622 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.622 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:41.622 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.622 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.622 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.622 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.622 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:41.622 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:41.881 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:41.881 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.881 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:41.881 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:41.881 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:41.881 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.881 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:41.881 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.881 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:41.881 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.881 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:41.881 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:41.881 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:42.139 00:19:42.139 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.139 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.139 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.140 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.140 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.409 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.409 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.409 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.409 
13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.409 { 00:19:42.409 "cntlid": 7, 00:19:42.409 "qid": 0, 00:19:42.409 "state": "enabled", 00:19:42.409 "thread": "nvmf_tgt_poll_group_000", 00:19:42.409 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:42.409 "listen_address": { 00:19:42.409 "trtype": "TCP", 00:19:42.409 "adrfam": "IPv4", 00:19:42.409 "traddr": "10.0.0.2", 00:19:42.409 "trsvcid": "4420" 00:19:42.409 }, 00:19:42.409 "peer_address": { 00:19:42.409 "trtype": "TCP", 00:19:42.410 "adrfam": "IPv4", 00:19:42.410 "traddr": "10.0.0.1", 00:19:42.410 "trsvcid": "48800" 00:19:42.410 }, 00:19:42.410 "auth": { 00:19:42.410 "state": "completed", 00:19:42.410 "digest": "sha256", 00:19:42.410 "dhgroup": "null" 00:19:42.410 } 00:19:42.410 } 00:19:42.410 ]' 00:19:42.410 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.410 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.410 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.410 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:42.410 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.410 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.410 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.410 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.667 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:19:42.667 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:19:43.232 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.232 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:43.232 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.232 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.232 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.232 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:43.232 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.232 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:43.232 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:19:43.489 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:43.489 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.489 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:43.489 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:43.489 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:43.489 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.489 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.489 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.489 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.489 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.489 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.489 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.489 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.748 00:19:43.748 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.748 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.748 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.748 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.748 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.748 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.748 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.748 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.748 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.748 { 00:19:43.748 "cntlid": 9, 00:19:43.748 "qid": 0, 00:19:43.748 "state": "enabled", 00:19:43.748 "thread": "nvmf_tgt_poll_group_000", 00:19:43.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:43.748 "listen_address": { 00:19:43.748 "trtype": "TCP", 00:19:43.748 "adrfam": "IPv4", 00:19:43.748 "traddr": "10.0.0.2", 00:19:43.748 "trsvcid": "4420" 00:19:43.748 }, 00:19:43.748 "peer_address": { 00:19:43.748 "trtype": "TCP", 00:19:43.748 "adrfam": "IPv4", 00:19:43.748 "traddr": "10.0.0.1", 00:19:43.748 "trsvcid": "48824" 00:19:43.748 
}, 00:19:43.748 "auth": { 00:19:43.748 "state": "completed", 00:19:43.748 "digest": "sha256", 00:19:43.748 "dhgroup": "ffdhe2048" 00:19:43.748 } 00:19:43.748 } 00:19:43.748 ]' 00:19:43.748 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.006 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.006 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.006 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:44.006 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.006 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.006 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.006 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.264 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:19:44.264 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret 
DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:19:44.831 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.831 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:44.831 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.831 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.831 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.831 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.831 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:44.831 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:44.831 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:44.831 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.831 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:44.831 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:44.831 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:19:44.831 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.832 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.832 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.832 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.832 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.832 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.832 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.832 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.110 00:19:45.110 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.110 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.110 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.369 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.369 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.369 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.369 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.369 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.369 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.369 { 00:19:45.369 "cntlid": 11, 00:19:45.369 "qid": 0, 00:19:45.369 "state": "enabled", 00:19:45.369 "thread": "nvmf_tgt_poll_group_000", 00:19:45.369 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:45.369 "listen_address": { 00:19:45.369 "trtype": "TCP", 00:19:45.369 "adrfam": "IPv4", 00:19:45.369 "traddr": "10.0.0.2", 00:19:45.369 "trsvcid": "4420" 00:19:45.369 }, 00:19:45.369 "peer_address": { 00:19:45.369 "trtype": "TCP", 00:19:45.369 "adrfam": "IPv4", 00:19:45.369 "traddr": "10.0.0.1", 00:19:45.369 "trsvcid": "48850" 00:19:45.369 }, 00:19:45.369 "auth": { 00:19:45.369 "state": "completed", 00:19:45.369 "digest": "sha256", 00:19:45.369 "dhgroup": "ffdhe2048" 00:19:45.369 } 00:19:45.369 } 00:19:45.369 ]' 00:19:45.369 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.369 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.369 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.627 13:51:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:45.627 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.627 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.627 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.627 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.627 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:19:45.627 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:19:46.193 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.452 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:46.452 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:46.452 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.452 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.452 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.452 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:46.452 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:46.452 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:46.452 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.452 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:46.452 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:46.452 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:46.452 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.452 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.452 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.452 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:46.452 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.452 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.452 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.452 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.709 00:19:46.709 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.709 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.709 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.967 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.967 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.967 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.967 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.967 13:51:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.967 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.967 { 00:19:46.967 "cntlid": 13, 00:19:46.967 "qid": 0, 00:19:46.967 "state": "enabled", 00:19:46.967 "thread": "nvmf_tgt_poll_group_000", 00:19:46.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:46.967 "listen_address": { 00:19:46.967 "trtype": "TCP", 00:19:46.967 "adrfam": "IPv4", 00:19:46.967 "traddr": "10.0.0.2", 00:19:46.967 "trsvcid": "4420" 00:19:46.967 }, 00:19:46.967 "peer_address": { 00:19:46.967 "trtype": "TCP", 00:19:46.967 "adrfam": "IPv4", 00:19:46.967 "traddr": "10.0.0.1", 00:19:46.967 "trsvcid": "48888" 00:19:46.967 }, 00:19:46.967 "auth": { 00:19:46.967 "state": "completed", 00:19:46.967 "digest": "sha256", 00:19:46.967 "dhgroup": "ffdhe2048" 00:19:46.967 } 00:19:46.967 } 00:19:46.967 ]' 00:19:46.967 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.967 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:46.967 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.225 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:47.225 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.225 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.225 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.225 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.225 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:19:47.225 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:19:47.792 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.792 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:47.792 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.792 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.792 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.792 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:47.792 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:47.792 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:48.050 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:48.050 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.050 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:48.050 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:48.050 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:48.050 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.050 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:48.050 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.050 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.050 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.050 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:48.050 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:48.050 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:48.309 00:19:48.309 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.309 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.309 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.568 { 00:19:48.568 "cntlid": 15, 00:19:48.568 "qid": 0, 00:19:48.568 "state": "enabled", 00:19:48.568 "thread": "nvmf_tgt_poll_group_000", 00:19:48.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:48.568 "listen_address": { 00:19:48.568 "trtype": "TCP", 00:19:48.568 "adrfam": "IPv4", 00:19:48.568 "traddr": "10.0.0.2", 00:19:48.568 "trsvcid": "4420" 00:19:48.568 }, 00:19:48.568 "peer_address": { 00:19:48.568 "trtype": "TCP", 00:19:48.568 "adrfam": "IPv4", 00:19:48.568 "traddr": "10.0.0.1", 
00:19:48.568 "trsvcid": "34668" 00:19:48.568 }, 00:19:48.568 "auth": { 00:19:48.568 "state": "completed", 00:19:48.568 "digest": "sha256", 00:19:48.568 "dhgroup": "ffdhe2048" 00:19:48.568 } 00:19:48.568 } 00:19:48.568 ]' 00:19:48.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:48.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.568 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.827 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:19:48.827 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:19:49.394 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.394 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:49.394 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.394 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.394 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.394 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:49.394 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.394 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:49.394 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:49.653 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:49.653 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.653 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:49.653 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:49.653 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:49.653 13:51:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.653 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.653 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.653 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.653 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.653 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.653 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.653 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.911 00:19:49.911 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.911 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.911 13:51:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.171 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.171 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.171 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.171 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.171 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.171 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.171 { 00:19:50.171 "cntlid": 17, 00:19:50.171 "qid": 0, 00:19:50.171 "state": "enabled", 00:19:50.171 "thread": "nvmf_tgt_poll_group_000", 00:19:50.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:50.171 "listen_address": { 00:19:50.171 "trtype": "TCP", 00:19:50.171 "adrfam": "IPv4", 00:19:50.171 "traddr": "10.0.0.2", 00:19:50.171 "trsvcid": "4420" 00:19:50.171 }, 00:19:50.171 "peer_address": { 00:19:50.171 "trtype": "TCP", 00:19:50.171 "adrfam": "IPv4", 00:19:50.171 "traddr": "10.0.0.1", 00:19:50.171 "trsvcid": "34700" 00:19:50.171 }, 00:19:50.171 "auth": { 00:19:50.171 "state": "completed", 00:19:50.171 "digest": "sha256", 00:19:50.171 "dhgroup": "ffdhe3072" 00:19:50.171 } 00:19:50.171 } 00:19:50.171 ]' 00:19:50.171 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.171 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.171 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.171 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:50.171 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.171 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.171 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.171 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.430 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:19:50.430 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:19:50.997 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.997 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:50.997 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.997 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.997 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.997 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.997 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:50.997 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:51.255 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:51.255 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.255 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:51.255 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:51.255 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:51.255 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.255 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.255 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.255 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:51.255 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.255 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.255 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.255 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.514 00:19:51.514 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.514 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.514 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.772 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.772 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.772 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.772 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.772 
13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.772 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.772 { 00:19:51.772 "cntlid": 19, 00:19:51.772 "qid": 0, 00:19:51.772 "state": "enabled", 00:19:51.772 "thread": "nvmf_tgt_poll_group_000", 00:19:51.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:51.772 "listen_address": { 00:19:51.772 "trtype": "TCP", 00:19:51.772 "adrfam": "IPv4", 00:19:51.772 "traddr": "10.0.0.2", 00:19:51.772 "trsvcid": "4420" 00:19:51.772 }, 00:19:51.772 "peer_address": { 00:19:51.772 "trtype": "TCP", 00:19:51.772 "adrfam": "IPv4", 00:19:51.772 "traddr": "10.0.0.1", 00:19:51.772 "trsvcid": "34728" 00:19:51.772 }, 00:19:51.772 "auth": { 00:19:51.772 "state": "completed", 00:19:51.772 "digest": "sha256", 00:19:51.772 "dhgroup": "ffdhe3072" 00:19:51.772 } 00:19:51.772 } 00:19:51.772 ]' 00:19:51.772 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.772 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.772 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.772 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:51.772 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.772 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.772 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.772 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.030 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:19:52.030 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:19:52.595 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.595 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:52.595 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.595 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.595 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.595 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.595 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:52.595 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:52.852 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:52.852 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.852 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:52.852 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:52.852 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:52.852 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.852 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.852 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.852 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.852 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.852 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.852 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.852 13:51:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.109 00:19:53.109 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.109 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.109 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.367 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.367 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.367 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.367 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.367 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.367 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.367 { 00:19:53.367 "cntlid": 21, 00:19:53.367 "qid": 0, 00:19:53.367 "state": "enabled", 00:19:53.367 "thread": "nvmf_tgt_poll_group_000", 00:19:53.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:53.367 "listen_address": { 00:19:53.367 "trtype": "TCP", 00:19:53.367 "adrfam": "IPv4", 00:19:53.367 "traddr": "10.0.0.2", 00:19:53.367 "trsvcid": "4420" 00:19:53.367 }, 00:19:53.367 "peer_address": { 
00:19:53.367 "trtype": "TCP", 00:19:53.367 "adrfam": "IPv4", 00:19:53.367 "traddr": "10.0.0.1", 00:19:53.367 "trsvcid": "34754" 00:19:53.367 }, 00:19:53.367 "auth": { 00:19:53.367 "state": "completed", 00:19:53.367 "digest": "sha256", 00:19:53.367 "dhgroup": "ffdhe3072" 00:19:53.367 } 00:19:53.367 } 00:19:53.367 ]' 00:19:53.367 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.367 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.367 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.367 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:53.367 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.367 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.367 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.367 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.625 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:19:53.625 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:19:54.191 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.191 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:54.191 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.191 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.191 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.191 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.191 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:54.191 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:54.449 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:54.449 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.449 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:54.449 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:54.449 13:51:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:54.449 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.449 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:54.449 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.449 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.449 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.449 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:54.449 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:54.449 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:54.709 00:19:54.709 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.709 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.709 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.967 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.967 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.968 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.968 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.968 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.968 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.968 { 00:19:54.968 "cntlid": 23, 00:19:54.968 "qid": 0, 00:19:54.968 "state": "enabled", 00:19:54.968 "thread": "nvmf_tgt_poll_group_000", 00:19:54.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:54.968 "listen_address": { 00:19:54.968 "trtype": "TCP", 00:19:54.968 "adrfam": "IPv4", 00:19:54.968 "traddr": "10.0.0.2", 00:19:54.968 "trsvcid": "4420" 00:19:54.968 }, 00:19:54.968 "peer_address": { 00:19:54.968 "trtype": "TCP", 00:19:54.968 "adrfam": "IPv4", 00:19:54.968 "traddr": "10.0.0.1", 00:19:54.968 "trsvcid": "34784" 00:19:54.968 }, 00:19:54.968 "auth": { 00:19:54.968 "state": "completed", 00:19:54.968 "digest": "sha256", 00:19:54.968 "dhgroup": "ffdhe3072" 00:19:54.968 } 00:19:54.968 } 00:19:54.968 ]' 00:19:54.968 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.968 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.968 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.968 13:51:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:54.968 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.968 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.968 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.968 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.226 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:19:55.226 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:19:55.888 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.888 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:55.888 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.888 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:55.888 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.888 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.888 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.888 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:55.888 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:56.154 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:56.154 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.154 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:56.154 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:56.154 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:56.154 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.154 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.154 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.154 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:56.154 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.154 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.154 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.154 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.413 00:19:56.413 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.413 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.413 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.413 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.413 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.413 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.413 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.413 13:51:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.413 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.413 { 00:19:56.413 "cntlid": 25, 00:19:56.413 "qid": 0, 00:19:56.413 "state": "enabled", 00:19:56.413 "thread": "nvmf_tgt_poll_group_000", 00:19:56.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:56.413 "listen_address": { 00:19:56.413 "trtype": "TCP", 00:19:56.413 "adrfam": "IPv4", 00:19:56.413 "traddr": "10.0.0.2", 00:19:56.413 "trsvcid": "4420" 00:19:56.413 }, 00:19:56.413 "peer_address": { 00:19:56.413 "trtype": "TCP", 00:19:56.413 "adrfam": "IPv4", 00:19:56.413 "traddr": "10.0.0.1", 00:19:56.413 "trsvcid": "34812" 00:19:56.413 }, 00:19:56.413 "auth": { 00:19:56.413 "state": "completed", 00:19:56.413 "digest": "sha256", 00:19:56.413 "dhgroup": "ffdhe4096" 00:19:56.413 } 00:19:56.413 } 00:19:56.413 ]' 00:19:56.413 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.672 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.672 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.672 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:56.672 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.672 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.672 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.672 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.931 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:19:56.931 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:19:57.499 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.499 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:57.499 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.499 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.499 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.499 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.499 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:57.499 13:51:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:57.758 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:57.758 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.758 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:57.758 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:57.758 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:57.758 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.758 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.758 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.758 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.758 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.758 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.758 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.758 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.016 00:19:58.016 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.016 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.016 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.016 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.016 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.016 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.016 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.275 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.275 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.275 { 00:19:58.275 "cntlid": 27, 00:19:58.275 "qid": 0, 00:19:58.275 "state": "enabled", 00:19:58.275 "thread": "nvmf_tgt_poll_group_000", 00:19:58.275 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:58.275 "listen_address": { 00:19:58.275 "trtype": "TCP", 00:19:58.275 "adrfam": "IPv4", 00:19:58.275 "traddr": "10.0.0.2", 00:19:58.275 
"trsvcid": "4420" 00:19:58.275 }, 00:19:58.275 "peer_address": { 00:19:58.275 "trtype": "TCP", 00:19:58.275 "adrfam": "IPv4", 00:19:58.275 "traddr": "10.0.0.1", 00:19:58.275 "trsvcid": "53056" 00:19:58.275 }, 00:19:58.275 "auth": { 00:19:58.275 "state": "completed", 00:19:58.275 "digest": "sha256", 00:19:58.275 "dhgroup": "ffdhe4096" 00:19:58.275 } 00:19:58.275 } 00:19:58.275 ]' 00:19:58.275 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.275 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.275 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.275 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:58.275 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.275 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.275 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.275 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.534 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:19:58.534 13:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:19:59.120 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.120 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:59.120 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.120 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.120 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.120 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.120 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:59.120 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:59.378 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:59.378 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.378 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:59.378 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:59.378 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:59.378 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.378 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.378 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.378 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.378 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.378 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.378 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.378 13:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.636 00:19:59.636 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.636 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:59.636 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.636 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.636 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.636 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.636 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.636 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.636 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.636 { 00:19:59.636 "cntlid": 29, 00:19:59.636 "qid": 0, 00:19:59.636 "state": "enabled", 00:19:59.636 "thread": "nvmf_tgt_poll_group_000", 00:19:59.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:59.636 "listen_address": { 00:19:59.636 "trtype": "TCP", 00:19:59.636 "adrfam": "IPv4", 00:19:59.636 "traddr": "10.0.0.2", 00:19:59.636 "trsvcid": "4420" 00:19:59.636 }, 00:19:59.636 "peer_address": { 00:19:59.636 "trtype": "TCP", 00:19:59.636 "adrfam": "IPv4", 00:19:59.636 "traddr": "10.0.0.1", 00:19:59.636 "trsvcid": "53082" 00:19:59.636 }, 00:19:59.636 "auth": { 00:19:59.636 "state": "completed", 00:19:59.636 "digest": "sha256", 00:19:59.636 "dhgroup": "ffdhe4096" 00:19:59.636 } 00:19:59.636 } 00:19:59.636 ]' 00:19:59.636 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.894 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.894 13:51:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.894 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:59.894 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.894 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.894 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.894 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.151 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:20:00.151 13:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:20:00.718 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.718 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:00.718 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.718 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.718 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.718 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.718 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:00.718 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:00.976 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:00.976 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.976 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:00.976 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:00.976 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:00.976 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.976 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:00.976 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.976 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.976 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.976 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:00.976 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:00.976 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:01.234 00:20:01.234 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.234 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.234 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.234 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.234 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.234 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.234 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:01.493 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.493 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.493 { 00:20:01.493 "cntlid": 31, 00:20:01.493 "qid": 0, 00:20:01.493 "state": "enabled", 00:20:01.493 "thread": "nvmf_tgt_poll_group_000", 00:20:01.493 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:01.493 "listen_address": { 00:20:01.493 "trtype": "TCP", 00:20:01.493 "adrfam": "IPv4", 00:20:01.493 "traddr": "10.0.0.2", 00:20:01.493 "trsvcid": "4420" 00:20:01.493 }, 00:20:01.493 "peer_address": { 00:20:01.493 "trtype": "TCP", 00:20:01.493 "adrfam": "IPv4", 00:20:01.493 "traddr": "10.0.0.1", 00:20:01.493 "trsvcid": "53100" 00:20:01.493 }, 00:20:01.493 "auth": { 00:20:01.493 "state": "completed", 00:20:01.493 "digest": "sha256", 00:20:01.493 "dhgroup": "ffdhe4096" 00:20:01.493 } 00:20:01.493 } 00:20:01.493 ]' 00:20:01.493 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.493 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:01.493 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.493 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:01.493 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.493 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.493 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.493 13:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.751 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:20:01.751 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:20:02.317 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.317 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:02.317 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.317 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.317 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.317 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:02.317 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.317 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:02.317 13:51:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:02.575 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:02.575 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.575 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:02.575 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:02.575 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:02.575 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.575 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.575 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.575 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.575 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.575 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.575 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.575 13:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.833 00:20:02.833 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.833 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.833 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.109 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.109 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.109 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.109 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.109 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.109 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.109 { 00:20:03.109 "cntlid": 33, 00:20:03.109 "qid": 0, 00:20:03.109 "state": "enabled", 00:20:03.109 "thread": "nvmf_tgt_poll_group_000", 00:20:03.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:03.109 "listen_address": { 00:20:03.109 "trtype": "TCP", 00:20:03.109 "adrfam": "IPv4", 00:20:03.109 "traddr": "10.0.0.2", 00:20:03.109 
"trsvcid": "4420" 00:20:03.109 }, 00:20:03.109 "peer_address": { 00:20:03.109 "trtype": "TCP", 00:20:03.109 "adrfam": "IPv4", 00:20:03.109 "traddr": "10.0.0.1", 00:20:03.109 "trsvcid": "53136" 00:20:03.109 }, 00:20:03.109 "auth": { 00:20:03.109 "state": "completed", 00:20:03.109 "digest": "sha256", 00:20:03.109 "dhgroup": "ffdhe6144" 00:20:03.109 } 00:20:03.109 } 00:20:03.109 ]' 00:20:03.109 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.109 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.109 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.109 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:03.109 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.109 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.109 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.109 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.367 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:20:03.367 13:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:20:03.933 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.933 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:03.933 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.933 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.933 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.933 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.933 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:03.933 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:04.191 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:04.191 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.191 13:51:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:04.191 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:04.191 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:04.191 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.191 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.191 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.191 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.191 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.191 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.192 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.192 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.450 00:20:04.450 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.450 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.450 13:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.708 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.708 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.708 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.708 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.708 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.708 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.708 { 00:20:04.708 "cntlid": 35, 00:20:04.708 "qid": 0, 00:20:04.708 "state": "enabled", 00:20:04.708 "thread": "nvmf_tgt_poll_group_000", 00:20:04.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:04.708 "listen_address": { 00:20:04.708 "trtype": "TCP", 00:20:04.708 "adrfam": "IPv4", 00:20:04.708 "traddr": "10.0.0.2", 00:20:04.708 "trsvcid": "4420" 00:20:04.708 }, 00:20:04.708 "peer_address": { 00:20:04.708 "trtype": "TCP", 00:20:04.708 "adrfam": "IPv4", 00:20:04.708 "traddr": "10.0.0.1", 00:20:04.708 "trsvcid": "53170" 00:20:04.708 }, 00:20:04.708 "auth": { 00:20:04.708 "state": "completed", 00:20:04.708 "digest": "sha256", 00:20:04.708 "dhgroup": "ffdhe6144" 00:20:04.708 } 00:20:04.708 } 00:20:04.708 ]' 00:20:04.708 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.708 13:51:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.708 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.708 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:04.708 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.708 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.708 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.708 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.967 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:20:04.967 13:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:20:05.534 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.534 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:05.534 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.534 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.534 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.534 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.534 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:05.534 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:05.793 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:05.793 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.793 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:05.793 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:05.793 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:05.793 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.793 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:05.793 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.793 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.793 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.793 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.793 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.793 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.052 00:20:06.052 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.052 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.052 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.311 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.311 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.311 13:51:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.311 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.311 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.311 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.311 { 00:20:06.311 "cntlid": 37, 00:20:06.311 "qid": 0, 00:20:06.311 "state": "enabled", 00:20:06.311 "thread": "nvmf_tgt_poll_group_000", 00:20:06.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:06.311 "listen_address": { 00:20:06.311 "trtype": "TCP", 00:20:06.311 "adrfam": "IPv4", 00:20:06.311 "traddr": "10.0.0.2", 00:20:06.311 "trsvcid": "4420" 00:20:06.311 }, 00:20:06.311 "peer_address": { 00:20:06.311 "trtype": "TCP", 00:20:06.311 "adrfam": "IPv4", 00:20:06.311 "traddr": "10.0.0.1", 00:20:06.311 "trsvcid": "53194" 00:20:06.311 }, 00:20:06.311 "auth": { 00:20:06.311 "state": "completed", 00:20:06.311 "digest": "sha256", 00:20:06.311 "dhgroup": "ffdhe6144" 00:20:06.311 } 00:20:06.311 } 00:20:06.311 ]' 00:20:06.311 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.311 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.311 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.570 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:06.570 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.570 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.570 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.570 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.828 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:20:06.828 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:20:07.395 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.396 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:07.396 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.396 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.396 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.396 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.396 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:07.396 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:07.396 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:07.396 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.396 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:07.396 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:07.396 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:07.396 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.396 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:07.396 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.396 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.396 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.396 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:07.396 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.396 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.962 00:20:07.962 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.962 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.962 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.962 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.962 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.962 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.962 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.962 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.962 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.962 { 00:20:07.962 "cntlid": 39, 00:20:07.962 "qid": 0, 00:20:07.962 "state": "enabled", 00:20:07.962 "thread": "nvmf_tgt_poll_group_000", 00:20:07.962 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:07.962 "listen_address": { 00:20:07.962 "trtype": "TCP", 00:20:07.962 "adrfam": 
"IPv4", 00:20:07.962 "traddr": "10.0.0.2", 00:20:07.962 "trsvcid": "4420" 00:20:07.962 }, 00:20:07.962 "peer_address": { 00:20:07.962 "trtype": "TCP", 00:20:07.962 "adrfam": "IPv4", 00:20:07.962 "traddr": "10.0.0.1", 00:20:07.962 "trsvcid": "53738" 00:20:07.962 }, 00:20:07.962 "auth": { 00:20:07.962 "state": "completed", 00:20:07.962 "digest": "sha256", 00:20:07.962 "dhgroup": "ffdhe6144" 00:20:07.962 } 00:20:07.962 } 00:20:07.962 ]' 00:20:07.962 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.219 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:08.219 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.219 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:08.219 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.219 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.219 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.219 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.504 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:20:08.504 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:20:09.071 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.071 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:09.071 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.071 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.071 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.071 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:09.071 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.071 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:09.071 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:09.071 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:09.071 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.071 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:09.071 
13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:09.071 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:09.071 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.071 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.071 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.071 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.071 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.071 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.072 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.072 13:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.635 00:20:09.635 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.635 13:51:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.635 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.893 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.893 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.893 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.893 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.893 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.893 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.893 { 00:20:09.893 "cntlid": 41, 00:20:09.893 "qid": 0, 00:20:09.893 "state": "enabled", 00:20:09.893 "thread": "nvmf_tgt_poll_group_000", 00:20:09.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:09.894 "listen_address": { 00:20:09.894 "trtype": "TCP", 00:20:09.894 "adrfam": "IPv4", 00:20:09.894 "traddr": "10.0.0.2", 00:20:09.894 "trsvcid": "4420" 00:20:09.894 }, 00:20:09.894 "peer_address": { 00:20:09.894 "trtype": "TCP", 00:20:09.894 "adrfam": "IPv4", 00:20:09.894 "traddr": "10.0.0.1", 00:20:09.894 "trsvcid": "53780" 00:20:09.894 }, 00:20:09.894 "auth": { 00:20:09.894 "state": "completed", 00:20:09.894 "digest": "sha256", 00:20:09.894 "dhgroup": "ffdhe8192" 00:20:09.894 } 00:20:09.894 } 00:20:09.894 ]' 00:20:09.894 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.894 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:20:09.894 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.894 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:09.894 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.894 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.894 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.894 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.152 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:20:10.152 13:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:20:10.719 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.719 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:10.719 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.719 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.719 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.719 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.719 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:10.719 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:10.977 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:10.977 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.977 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:10.977 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:10.977 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:10.977 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.977 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:10.977 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.977 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.977 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.977 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.977 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.977 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.544 00:20:11.544 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.544 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.544 13:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.803 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.803 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.803 13:51:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.803 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.803 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.803 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.803 { 00:20:11.803 "cntlid": 43, 00:20:11.803 "qid": 0, 00:20:11.803 "state": "enabled", 00:20:11.803 "thread": "nvmf_tgt_poll_group_000", 00:20:11.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:11.803 "listen_address": { 00:20:11.803 "trtype": "TCP", 00:20:11.803 "adrfam": "IPv4", 00:20:11.803 "traddr": "10.0.0.2", 00:20:11.803 "trsvcid": "4420" 00:20:11.803 }, 00:20:11.803 "peer_address": { 00:20:11.803 "trtype": "TCP", 00:20:11.803 "adrfam": "IPv4", 00:20:11.803 "traddr": "10.0.0.1", 00:20:11.803 "trsvcid": "53810" 00:20:11.803 }, 00:20:11.803 "auth": { 00:20:11.803 "state": "completed", 00:20:11.803 "digest": "sha256", 00:20:11.803 "dhgroup": "ffdhe8192" 00:20:11.803 } 00:20:11.803 } 00:20:11.803 ]' 00:20:11.803 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.803 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.803 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.803 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:11.803 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.803 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.803 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.803 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.062 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:20:12.062 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:20:12.629 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.629 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:12.629 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.629 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.629 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.629 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.629 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:12.629 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:12.887 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:12.888 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.888 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:12.888 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:12.888 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:12.888 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.888 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.888 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.888 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.888 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.888 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.888 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.888 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.454 00:20:13.454 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.454 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.454 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.454 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.454 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.454 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.454 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.454 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.454 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.454 { 00:20:13.454 "cntlid": 45, 00:20:13.454 "qid": 0, 00:20:13.454 "state": "enabled", 00:20:13.454 "thread": "nvmf_tgt_poll_group_000", 00:20:13.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:13.454 
"listen_address": { 00:20:13.454 "trtype": "TCP", 00:20:13.454 "adrfam": "IPv4", 00:20:13.454 "traddr": "10.0.0.2", 00:20:13.454 "trsvcid": "4420" 00:20:13.454 }, 00:20:13.454 "peer_address": { 00:20:13.454 "trtype": "TCP", 00:20:13.454 "adrfam": "IPv4", 00:20:13.454 "traddr": "10.0.0.1", 00:20:13.454 "trsvcid": "53832" 00:20:13.454 }, 00:20:13.455 "auth": { 00:20:13.455 "state": "completed", 00:20:13.455 "digest": "sha256", 00:20:13.455 "dhgroup": "ffdhe8192" 00:20:13.455 } 00:20:13.455 } 00:20:13.455 ]' 00:20:13.455 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.455 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:13.455 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.713 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:13.713 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.713 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.713 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.713 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.972 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:20:13.972 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:20:14.538 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.538 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:14.538 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.538 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.538 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.538 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.538 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:14.538 13:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:14.538 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:14.538 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.538 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:20:14.538 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:14.538 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:14.538 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.538 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:14.538 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.539 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.539 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.539 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:14.539 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:14.539 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:15.104 00:20:15.104 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.104 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:20:15.105 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.362 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.362 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.362 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.362 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.362 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.362 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.362 { 00:20:15.362 "cntlid": 47, 00:20:15.362 "qid": 0, 00:20:15.362 "state": "enabled", 00:20:15.362 "thread": "nvmf_tgt_poll_group_000", 00:20:15.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:15.362 "listen_address": { 00:20:15.362 "trtype": "TCP", 00:20:15.362 "adrfam": "IPv4", 00:20:15.362 "traddr": "10.0.0.2", 00:20:15.362 "trsvcid": "4420" 00:20:15.362 }, 00:20:15.362 "peer_address": { 00:20:15.362 "trtype": "TCP", 00:20:15.362 "adrfam": "IPv4", 00:20:15.362 "traddr": "10.0.0.1", 00:20:15.362 "trsvcid": "53860" 00:20:15.362 }, 00:20:15.362 "auth": { 00:20:15.362 "state": "completed", 00:20:15.362 "digest": "sha256", 00:20:15.362 "dhgroup": "ffdhe8192" 00:20:15.362 } 00:20:15.362 } 00:20:15.362 ]' 00:20:15.362 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.362 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:15.362 13:51:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.362 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:15.362 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.362 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.362 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.362 13:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.619 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:20:15.619 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:20:16.185 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.185 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:16.185 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:16.185 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.185 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.185 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:16.185 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:16.185 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.185 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:16.185 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:16.443 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:16.443 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.443 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:16.443 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:16.443 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:16.443 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.443 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.443 
13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.443 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.443 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.443 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.443 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.443 13:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.701 00:20:16.701 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.701 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.701 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.959 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.959 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.959 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.959 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.959 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.959 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.959 { 00:20:16.959 "cntlid": 49, 00:20:16.959 "qid": 0, 00:20:16.959 "state": "enabled", 00:20:16.959 "thread": "nvmf_tgt_poll_group_000", 00:20:16.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:16.959 "listen_address": { 00:20:16.959 "trtype": "TCP", 00:20:16.959 "adrfam": "IPv4", 00:20:16.959 "traddr": "10.0.0.2", 00:20:16.959 "trsvcid": "4420" 00:20:16.959 }, 00:20:16.959 "peer_address": { 00:20:16.959 "trtype": "TCP", 00:20:16.959 "adrfam": "IPv4", 00:20:16.959 "traddr": "10.0.0.1", 00:20:16.959 "trsvcid": "53888" 00:20:16.959 }, 00:20:16.959 "auth": { 00:20:16.959 "state": "completed", 00:20:16.959 "digest": "sha384", 00:20:16.959 "dhgroup": "null" 00:20:16.959 } 00:20:16.959 } 00:20:16.959 ]' 00:20:16.959 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.959 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.959 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.959 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:16.959 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.959 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.959 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:20:16.959 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.217 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:20:17.217 13:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:20:17.782 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.782 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:17.782 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.782 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.782 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.782 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.782 13:52:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:17.782 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:18.040 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:18.040 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.040 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:18.040 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:18.040 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:18.040 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.040 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.040 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.040 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.040 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.040 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.040 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.040 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.298 00:20:18.298 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.298 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.298 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.557 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.557 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.557 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.557 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.557 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.557 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.557 { 00:20:18.557 "cntlid": 51, 00:20:18.557 "qid": 0, 00:20:18.557 "state": "enabled", 00:20:18.557 "thread": "nvmf_tgt_poll_group_000", 00:20:18.557 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:18.557 "listen_address": { 00:20:18.557 "trtype": "TCP", 00:20:18.557 "adrfam": "IPv4", 00:20:18.557 "traddr": "10.0.0.2", 00:20:18.557 "trsvcid": "4420" 00:20:18.557 }, 00:20:18.557 "peer_address": { 00:20:18.557 "trtype": "TCP", 00:20:18.557 "adrfam": "IPv4", 00:20:18.557 "traddr": "10.0.0.1", 00:20:18.557 "trsvcid": "48808" 00:20:18.557 }, 00:20:18.557 "auth": { 00:20:18.557 "state": "completed", 00:20:18.557 "digest": "sha384", 00:20:18.557 "dhgroup": "null" 00:20:18.557 } 00:20:18.557 } 00:20:18.557 ]' 00:20:18.557 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.557 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.557 13:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.557 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:18.557 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.557 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.557 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.557 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.815 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:20:18.815 13:52:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:20:19.380 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.380 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:19.380 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.380 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.380 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.380 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.380 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:19.380 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:19.638 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:19.638 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:20:19.638 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:19.638 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:19.638 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:19.638 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.638 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.638 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.638 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.638 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.638 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.638 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.638 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.897 00:20:19.897 13:52:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.897 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.897 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.897 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.897 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.897 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.897 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.897 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.897 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.897 { 00:20:19.897 "cntlid": 53, 00:20:19.897 "qid": 0, 00:20:19.897 "state": "enabled", 00:20:19.897 "thread": "nvmf_tgt_poll_group_000", 00:20:19.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:19.897 "listen_address": { 00:20:19.897 "trtype": "TCP", 00:20:19.897 "adrfam": "IPv4", 00:20:19.897 "traddr": "10.0.0.2", 00:20:19.897 "trsvcid": "4420" 00:20:19.897 }, 00:20:19.897 "peer_address": { 00:20:19.897 "trtype": "TCP", 00:20:19.897 "adrfam": "IPv4", 00:20:19.897 "traddr": "10.0.0.1", 00:20:19.897 "trsvcid": "48830" 00:20:19.897 }, 00:20:19.897 "auth": { 00:20:19.897 "state": "completed", 00:20:19.897 "digest": "sha384", 00:20:19.897 "dhgroup": "null" 00:20:19.897 } 00:20:19.897 } 00:20:19.897 ]' 00:20:19.897 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:20:20.156 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:20.156 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.156 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:20.156 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.156 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.156 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.156 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.414 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:20:20.414 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:20:20.979 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.979 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:20.979 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.979 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.979 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.979 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.979 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:20.979 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:20.979 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:20.979 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.979 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:20.979 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:20.979 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:20.979 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.979 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:20.979 
13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.979 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.979 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.979 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:20.979 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:20.979 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:21.238 00:20:21.496 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.496 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.496 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.496 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.496 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.496 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.496 13:52:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.496 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.496 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.496 { 00:20:21.496 "cntlid": 55, 00:20:21.496 "qid": 0, 00:20:21.496 "state": "enabled", 00:20:21.496 "thread": "nvmf_tgt_poll_group_000", 00:20:21.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:21.496 "listen_address": { 00:20:21.496 "trtype": "TCP", 00:20:21.496 "adrfam": "IPv4", 00:20:21.496 "traddr": "10.0.0.2", 00:20:21.496 "trsvcid": "4420" 00:20:21.496 }, 00:20:21.496 "peer_address": { 00:20:21.496 "trtype": "TCP", 00:20:21.496 "adrfam": "IPv4", 00:20:21.496 "traddr": "10.0.0.1", 00:20:21.496 "trsvcid": "48858" 00:20:21.496 }, 00:20:21.496 "auth": { 00:20:21.496 "state": "completed", 00:20:21.496 "digest": "sha384", 00:20:21.496 "dhgroup": "null" 00:20:21.496 } 00:20:21.496 } 00:20:21.496 ]' 00:20:21.497 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.754 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.754 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.754 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:21.755 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.755 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.755 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.755 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.013 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:20:22.013 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:20:22.579 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.579 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:22.579 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.579 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.579 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.579 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:22.579 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.579 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:22.579 13:52:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:22.579 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:22.579 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.579 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:22.579 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:22.579 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:22.579 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.579 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.579 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.579 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.579 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.579 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.579 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.579 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.841 00:20:22.841 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.841 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.841 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.099 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.099 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.099 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.099 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.099 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.099 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.099 { 00:20:23.099 "cntlid": 57, 00:20:23.099 "qid": 0, 00:20:23.099 "state": "enabled", 00:20:23.099 "thread": "nvmf_tgt_poll_group_000", 00:20:23.099 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:23.099 "listen_address": { 00:20:23.099 "trtype": "TCP", 00:20:23.099 "adrfam": "IPv4", 00:20:23.099 "traddr": "10.0.0.2", 00:20:23.099 
"trsvcid": "4420" 00:20:23.099 }, 00:20:23.099 "peer_address": { 00:20:23.099 "trtype": "TCP", 00:20:23.099 "adrfam": "IPv4", 00:20:23.099 "traddr": "10.0.0.1", 00:20:23.099 "trsvcid": "48882" 00:20:23.099 }, 00:20:23.099 "auth": { 00:20:23.099 "state": "completed", 00:20:23.099 "digest": "sha384", 00:20:23.099 "dhgroup": "ffdhe2048" 00:20:23.099 } 00:20:23.099 } 00:20:23.099 ]' 00:20:23.099 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.099 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.099 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.099 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:23.099 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.356 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.356 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.356 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.356 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:20:23.356 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:20:23.922 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.922 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:23.922 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.922 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.922 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.922 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.922 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:23.922 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:24.180 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:24.180 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.180 13:52:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:24.180 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:24.180 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:24.180 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.180 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.180 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.180 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.180 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.181 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.181 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.181 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.439 00:20:24.439 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.439 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.439 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.697 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.697 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.697 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.697 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.697 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.697 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.697 { 00:20:24.697 "cntlid": 59, 00:20:24.697 "qid": 0, 00:20:24.697 "state": "enabled", 00:20:24.697 "thread": "nvmf_tgt_poll_group_000", 00:20:24.697 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:24.697 "listen_address": { 00:20:24.697 "trtype": "TCP", 00:20:24.697 "adrfam": "IPv4", 00:20:24.697 "traddr": "10.0.0.2", 00:20:24.697 "trsvcid": "4420" 00:20:24.697 }, 00:20:24.697 "peer_address": { 00:20:24.697 "trtype": "TCP", 00:20:24.697 "adrfam": "IPv4", 00:20:24.697 "traddr": "10.0.0.1", 00:20:24.697 "trsvcid": "48912" 00:20:24.697 }, 00:20:24.697 "auth": { 00:20:24.697 "state": "completed", 00:20:24.697 "digest": "sha384", 00:20:24.697 "dhgroup": "ffdhe2048" 00:20:24.697 } 00:20:24.697 } 00:20:24.697 ]' 00:20:24.697 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.697 13:52:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.697 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.697 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:24.697 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.697 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.697 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.697 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.955 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:20:24.955 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:20:25.521 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.521 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:25.521 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.521 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.521 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.521 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.521 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:25.521 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:25.778 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:25.778 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.778 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:25.778 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:25.778 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:25.778 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.778 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:25.778 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.778 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.778 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.778 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.778 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.778 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.036 00:20:26.036 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.036 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.036 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.294 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.294 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.294 13:52:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.294 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.294 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.294 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.294 { 00:20:26.294 "cntlid": 61, 00:20:26.294 "qid": 0, 00:20:26.294 "state": "enabled", 00:20:26.294 "thread": "nvmf_tgt_poll_group_000", 00:20:26.294 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:26.294 "listen_address": { 00:20:26.294 "trtype": "TCP", 00:20:26.294 "adrfam": "IPv4", 00:20:26.294 "traddr": "10.0.0.2", 00:20:26.294 "trsvcid": "4420" 00:20:26.294 }, 00:20:26.294 "peer_address": { 00:20:26.294 "trtype": "TCP", 00:20:26.294 "adrfam": "IPv4", 00:20:26.294 "traddr": "10.0.0.1", 00:20:26.294 "trsvcid": "48928" 00:20:26.294 }, 00:20:26.294 "auth": { 00:20:26.294 "state": "completed", 00:20:26.294 "digest": "sha384", 00:20:26.294 "dhgroup": "ffdhe2048" 00:20:26.294 } 00:20:26.294 } 00:20:26.294 ]' 00:20:26.294 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.294 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.294 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.294 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:26.294 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.294 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.294 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.294 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.553 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:20:26.553 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:20:27.136 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.136 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:27.136 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.136 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.136 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.136 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.136 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:27.136 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:27.393 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:27.393 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.393 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:27.393 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:27.393 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:27.393 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.393 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:27.393 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.393 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.393 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.393 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:27.393 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:27.393 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:27.651 00:20:27.651 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.651 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.651 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.909 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.909 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.909 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.909 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.909 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.909 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.909 { 00:20:27.909 "cntlid": 63, 00:20:27.909 "qid": 0, 00:20:27.909 "state": "enabled", 00:20:27.909 "thread": "nvmf_tgt_poll_group_000", 00:20:27.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:27.909 "listen_address": { 00:20:27.909 "trtype": "TCP", 00:20:27.909 "adrfam": 
"IPv4", 00:20:27.909 "traddr": "10.0.0.2", 00:20:27.909 "trsvcid": "4420" 00:20:27.909 }, 00:20:27.909 "peer_address": { 00:20:27.909 "trtype": "TCP", 00:20:27.909 "adrfam": "IPv4", 00:20:27.909 "traddr": "10.0.0.1", 00:20:27.909 "trsvcid": "48950" 00:20:27.909 }, 00:20:27.909 "auth": { 00:20:27.909 "state": "completed", 00:20:27.909 "digest": "sha384", 00:20:27.909 "dhgroup": "ffdhe2048" 00:20:27.909 } 00:20:27.909 } 00:20:27.909 ]' 00:20:27.909 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.909 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.909 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.909 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:27.909 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.909 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.909 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.909 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.166 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:20:28.166 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:20:28.731 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.731 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:28.731 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.731 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.731 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.731 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:28.731 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.731 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:28.731 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:28.989 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:28.989 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.989 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:28.989 
13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:28.989 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:28.989 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.989 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.989 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.989 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.989 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.989 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.989 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.989 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.248 00:20:29.248 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.248 13:52:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.248 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.248 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.506 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.506 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.506 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.506 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.506 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.506 { 00:20:29.506 "cntlid": 65, 00:20:29.506 "qid": 0, 00:20:29.506 "state": "enabled", 00:20:29.506 "thread": "nvmf_tgt_poll_group_000", 00:20:29.506 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:29.506 "listen_address": { 00:20:29.506 "trtype": "TCP", 00:20:29.506 "adrfam": "IPv4", 00:20:29.506 "traddr": "10.0.0.2", 00:20:29.506 "trsvcid": "4420" 00:20:29.506 }, 00:20:29.506 "peer_address": { 00:20:29.506 "trtype": "TCP", 00:20:29.506 "adrfam": "IPv4", 00:20:29.506 "traddr": "10.0.0.1", 00:20:29.506 "trsvcid": "43624" 00:20:29.506 }, 00:20:29.506 "auth": { 00:20:29.506 "state": "completed", 00:20:29.506 "digest": "sha384", 00:20:29.506 "dhgroup": "ffdhe3072" 00:20:29.506 } 00:20:29.506 } 00:20:29.506 ]' 00:20:29.506 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.506 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:20:29.506 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.506 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:29.506 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.506 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.506 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.506 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.764 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:20:29.764 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:20:30.331 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.331 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:30.331 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.331 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.331 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.331 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.331 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:30.331 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:30.589 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:30.590 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.590 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:30.590 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:30.590 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:30.590 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.590 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:30.590 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.590 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.590 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.590 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.590 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.590 13:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.849 00:20:30.849 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.849 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.849 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.849 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.849 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.849 13:52:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.849 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.849 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.107 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.107 { 00:20:31.107 "cntlid": 67, 00:20:31.107 "qid": 0, 00:20:31.107 "state": "enabled", 00:20:31.107 "thread": "nvmf_tgt_poll_group_000", 00:20:31.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:31.107 "listen_address": { 00:20:31.107 "trtype": "TCP", 00:20:31.107 "adrfam": "IPv4", 00:20:31.107 "traddr": "10.0.0.2", 00:20:31.107 "trsvcid": "4420" 00:20:31.107 }, 00:20:31.107 "peer_address": { 00:20:31.107 "trtype": "TCP", 00:20:31.107 "adrfam": "IPv4", 00:20:31.107 "traddr": "10.0.0.1", 00:20:31.107 "trsvcid": "43646" 00:20:31.107 }, 00:20:31.107 "auth": { 00:20:31.107 "state": "completed", 00:20:31.107 "digest": "sha384", 00:20:31.107 "dhgroup": "ffdhe3072" 00:20:31.107 } 00:20:31.107 } 00:20:31.107 ]' 00:20:31.107 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.107 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.107 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.107 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:31.107 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.107 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.107 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.107 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.364 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:20:31.364 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:20:31.931 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.931 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:31.931 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.931 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.931 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.931 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.931 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:31.931 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:32.189 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:32.189 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.189 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:32.189 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:32.189 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:32.189 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.189 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.189 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.189 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.189 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.189 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.189 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.189 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.189 00:20:32.446 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.446 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.446 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.447 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.447 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.447 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.447 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.447 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.447 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.447 { 00:20:32.447 "cntlid": 69, 00:20:32.447 "qid": 0, 00:20:32.447 "state": "enabled", 00:20:32.447 "thread": "nvmf_tgt_poll_group_000", 00:20:32.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:32.447 
"listen_address": { 00:20:32.447 "trtype": "TCP", 00:20:32.447 "adrfam": "IPv4", 00:20:32.447 "traddr": "10.0.0.2", 00:20:32.447 "trsvcid": "4420" 00:20:32.447 }, 00:20:32.447 "peer_address": { 00:20:32.447 "trtype": "TCP", 00:20:32.447 "adrfam": "IPv4", 00:20:32.447 "traddr": "10.0.0.1", 00:20:32.447 "trsvcid": "43682" 00:20:32.447 }, 00:20:32.447 "auth": { 00:20:32.447 "state": "completed", 00:20:32.447 "digest": "sha384", 00:20:32.447 "dhgroup": "ffdhe3072" 00:20:32.447 } 00:20:32.447 } 00:20:32.447 ]' 00:20:32.447 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.713 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.713 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.713 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:32.713 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.713 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.713 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.713 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.970 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:20:32.970 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:20:33.592 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.592 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:33.592 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.592 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.592 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.592 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.592 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:33.592 13:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:33.592 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:33.592 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.592 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:33.592 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:33.592 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:33.592 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.592 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:33.592 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.592 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.592 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.592 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:33.592 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:33.592 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:33.913 00:20:33.913 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.913 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:20:33.913 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.170 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.170 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.170 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.171 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.171 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.171 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.171 { 00:20:34.171 "cntlid": 71, 00:20:34.171 "qid": 0, 00:20:34.171 "state": "enabled", 00:20:34.171 "thread": "nvmf_tgt_poll_group_000", 00:20:34.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:34.171 "listen_address": { 00:20:34.171 "trtype": "TCP", 00:20:34.171 "adrfam": "IPv4", 00:20:34.171 "traddr": "10.0.0.2", 00:20:34.171 "trsvcid": "4420" 00:20:34.171 }, 00:20:34.171 "peer_address": { 00:20:34.171 "trtype": "TCP", 00:20:34.171 "adrfam": "IPv4", 00:20:34.171 "traddr": "10.0.0.1", 00:20:34.171 "trsvcid": "43706" 00:20:34.171 }, 00:20:34.171 "auth": { 00:20:34.171 "state": "completed", 00:20:34.171 "digest": "sha384", 00:20:34.171 "dhgroup": "ffdhe3072" 00:20:34.171 } 00:20:34.171 } 00:20:34.171 ]' 00:20:34.171 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.171 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.171 13:52:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.171 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:34.171 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.171 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.171 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.171 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.429 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:20:34.429 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:20:34.994 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.994 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:34.994 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:34.994 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.994 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.994 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:34.994 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.994 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:34.994 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:35.252 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:35.252 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.252 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:35.252 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:35.252 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:35.252 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.252 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.252 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:35.252 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.252 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.252 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.252 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.252 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.511 00:20:35.511 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.511 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.511 13:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.771 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.771 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.771 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.771 13:52:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.771 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.771 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.771 { 00:20:35.771 "cntlid": 73, 00:20:35.771 "qid": 0, 00:20:35.771 "state": "enabled", 00:20:35.771 "thread": "nvmf_tgt_poll_group_000", 00:20:35.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:35.771 "listen_address": { 00:20:35.771 "trtype": "TCP", 00:20:35.771 "adrfam": "IPv4", 00:20:35.771 "traddr": "10.0.0.2", 00:20:35.771 "trsvcid": "4420" 00:20:35.771 }, 00:20:35.771 "peer_address": { 00:20:35.771 "trtype": "TCP", 00:20:35.771 "adrfam": "IPv4", 00:20:35.771 "traddr": "10.0.0.1", 00:20:35.771 "trsvcid": "43732" 00:20:35.771 }, 00:20:35.771 "auth": { 00:20:35.771 "state": "completed", 00:20:35.771 "digest": "sha384", 00:20:35.771 "dhgroup": "ffdhe4096" 00:20:35.771 } 00:20:35.771 } 00:20:35.771 ]' 00:20:35.771 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.771 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.771 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.771 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:35.771 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.771 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.771 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.771 13:52:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.029 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:20:36.029 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:20:36.595 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.595 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:36.595 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.595 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.595 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.595 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.595 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.595 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.853 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:36.853 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.853 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:36.853 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:36.853 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:36.853 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.853 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.853 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.853 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.853 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.853 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.853 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.853 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.112 00:20:37.112 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.112 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.112 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.371 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.371 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.371 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.371 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.371 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.371 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.371 { 00:20:37.371 "cntlid": 75, 00:20:37.371 "qid": 0, 00:20:37.371 "state": "enabled", 00:20:37.371 "thread": "nvmf_tgt_poll_group_000", 00:20:37.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:37.371 
"listen_address": { 00:20:37.371 "trtype": "TCP", 00:20:37.371 "adrfam": "IPv4", 00:20:37.371 "traddr": "10.0.0.2", 00:20:37.371 "trsvcid": "4420" 00:20:37.371 }, 00:20:37.371 "peer_address": { 00:20:37.371 "trtype": "TCP", 00:20:37.371 "adrfam": "IPv4", 00:20:37.371 "traddr": "10.0.0.1", 00:20:37.371 "trsvcid": "43754" 00:20:37.371 }, 00:20:37.371 "auth": { 00:20:37.371 "state": "completed", 00:20:37.371 "digest": "sha384", 00:20:37.371 "dhgroup": "ffdhe4096" 00:20:37.371 } 00:20:37.371 } 00:20:37.371 ]' 00:20:37.371 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.371 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.371 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.371 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:37.371 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.371 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.371 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.371 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.629 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:20:37.629 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:20:38.196 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.196 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:38.196 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.196 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.196 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.196 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.196 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:38.196 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:38.454 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:38.454 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.454 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:38.454 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:38.454 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:38.454 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.454 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.454 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.454 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.454 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.454 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.454 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.454 13:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.712 00:20:38.712 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:38.712 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.712 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.970 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.970 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.970 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.970 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.970 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.970 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.970 { 00:20:38.970 "cntlid": 77, 00:20:38.970 "qid": 0, 00:20:38.970 "state": "enabled", 00:20:38.970 "thread": "nvmf_tgt_poll_group_000", 00:20:38.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:38.970 "listen_address": { 00:20:38.970 "trtype": "TCP", 00:20:38.970 "adrfam": "IPv4", 00:20:38.970 "traddr": "10.0.0.2", 00:20:38.970 "trsvcid": "4420" 00:20:38.970 }, 00:20:38.970 "peer_address": { 00:20:38.970 "trtype": "TCP", 00:20:38.970 "adrfam": "IPv4", 00:20:38.970 "traddr": "10.0.0.1", 00:20:38.970 "trsvcid": "36486" 00:20:38.970 }, 00:20:38.970 "auth": { 00:20:38.970 "state": "completed", 00:20:38.970 "digest": "sha384", 00:20:38.970 "dhgroup": "ffdhe4096" 00:20:38.970 } 00:20:38.970 } 00:20:38.970 ]' 00:20:38.970 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:38.970 13:52:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:38.970 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.970 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:38.970 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.970 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.970 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.970 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.228 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:20:39.228 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:20:39.792 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.792 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:39.792 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.792 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.792 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.792 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:39.792 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:39.792 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:40.049 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:40.049 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.049 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:40.049 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:40.049 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:40.049 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.049 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:40.049 13:52:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.049 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.049 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.049 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:40.049 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:40.049 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:40.306 00:20:40.306 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.306 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.306 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.564 13:52:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.564 { 00:20:40.564 "cntlid": 79, 00:20:40.564 "qid": 0, 00:20:40.564 "state": "enabled", 00:20:40.564 "thread": "nvmf_tgt_poll_group_000", 00:20:40.564 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:40.564 "listen_address": { 00:20:40.564 "trtype": "TCP", 00:20:40.564 "adrfam": "IPv4", 00:20:40.564 "traddr": "10.0.0.2", 00:20:40.564 "trsvcid": "4420" 00:20:40.564 }, 00:20:40.564 "peer_address": { 00:20:40.564 "trtype": "TCP", 00:20:40.564 "adrfam": "IPv4", 00:20:40.564 "traddr": "10.0.0.1", 00:20:40.564 "trsvcid": "36512" 00:20:40.564 }, 00:20:40.564 "auth": { 00:20:40.564 "state": "completed", 00:20:40.564 "digest": "sha384", 00:20:40.564 "dhgroup": "ffdhe4096" 00:20:40.564 } 00:20:40.564 } 00:20:40.564 ]' 00:20:40.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:40.564 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.821 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.821 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.821 13:52:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.821 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:20:40.821 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:20:41.386 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.386 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:41.386 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.386 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.386 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.386 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:41.386 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.386 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:20:41.386 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:41.644 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:41.644 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.644 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:41.644 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:41.644 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:41.644 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.644 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.644 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.644 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.644 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.644 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.644 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.644 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.901 00:20:42.158 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.158 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.158 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.158 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.158 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.158 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.158 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.158 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.158 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.158 { 00:20:42.158 "cntlid": 81, 00:20:42.158 "qid": 0, 00:20:42.158 "state": "enabled", 00:20:42.158 "thread": "nvmf_tgt_poll_group_000", 00:20:42.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:42.158 "listen_address": { 
00:20:42.158 "trtype": "TCP", 00:20:42.158 "adrfam": "IPv4", 00:20:42.158 "traddr": "10.0.0.2", 00:20:42.158 "trsvcid": "4420" 00:20:42.158 }, 00:20:42.158 "peer_address": { 00:20:42.158 "trtype": "TCP", 00:20:42.158 "adrfam": "IPv4", 00:20:42.158 "traddr": "10.0.0.1", 00:20:42.158 "trsvcid": "36542" 00:20:42.158 }, 00:20:42.158 "auth": { 00:20:42.158 "state": "completed", 00:20:42.158 "digest": "sha384", 00:20:42.158 "dhgroup": "ffdhe6144" 00:20:42.158 } 00:20:42.158 } 00:20:42.158 ]' 00:20:42.158 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.158 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.158 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.416 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:42.416 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.416 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.416 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.416 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.673 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:20:42.673 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:20:43.237 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.237 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:43.237 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.237 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.237 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.237 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.237 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:43.237 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:43.237 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:43.237 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:20:43.237 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:43.237 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:43.237 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:43.237 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.237 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.237 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.237 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.237 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.237 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.237 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.237 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.804 00:20:43.804 13:52:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.804 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.804 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.804 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.804 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.804 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.804 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.804 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.804 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.804 { 00:20:43.804 "cntlid": 83, 00:20:43.804 "qid": 0, 00:20:43.804 "state": "enabled", 00:20:43.804 "thread": "nvmf_tgt_poll_group_000", 00:20:43.804 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:43.804 "listen_address": { 00:20:43.804 "trtype": "TCP", 00:20:43.804 "adrfam": "IPv4", 00:20:43.804 "traddr": "10.0.0.2", 00:20:43.804 "trsvcid": "4420" 00:20:43.804 }, 00:20:43.804 "peer_address": { 00:20:43.804 "trtype": "TCP", 00:20:43.804 "adrfam": "IPv4", 00:20:43.804 "traddr": "10.0.0.1", 00:20:43.804 "trsvcid": "36564" 00:20:43.804 }, 00:20:43.804 "auth": { 00:20:43.804 "state": "completed", 00:20:43.804 "digest": "sha384", 00:20:43.804 "dhgroup": "ffdhe6144" 00:20:43.804 } 00:20:43.804 } 00:20:43.804 ]' 00:20:43.804 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:20:44.063 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.063 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.063 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:44.063 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.063 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.063 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.063 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.321 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:20:44.321 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:20:44.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.888 13:52:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:44.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:44.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:44.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:44.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:44.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:44.888 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:44.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.889 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.456 00:20:45.456 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.456 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.456 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.456 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.456 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.456 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.456 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.456 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.456 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.456 { 00:20:45.456 "cntlid": 85, 00:20:45.456 "qid": 0, 00:20:45.456 "state": "enabled", 00:20:45.456 "thread": "nvmf_tgt_poll_group_000", 00:20:45.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:45.456 "listen_address": { 00:20:45.456 "trtype": "TCP", 00:20:45.456 "adrfam": "IPv4", 00:20:45.456 "traddr": "10.0.0.2", 00:20:45.456 "trsvcid": "4420" 00:20:45.456 }, 00:20:45.456 "peer_address": { 00:20:45.456 "trtype": "TCP", 00:20:45.456 "adrfam": "IPv4", 00:20:45.456 "traddr": "10.0.0.1", 00:20:45.456 "trsvcid": "36598" 00:20:45.456 }, 00:20:45.456 "auth": { 00:20:45.456 "state": "completed", 00:20:45.456 "digest": "sha384", 00:20:45.456 "dhgroup": "ffdhe6144" 00:20:45.456 } 00:20:45.456 } 00:20:45.456 ]' 00:20:45.456 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.714 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.714 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.714 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:45.714 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.714 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:45.714 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.714 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.972 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:20:45.972 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:20:46.538 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.538 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:46.538 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.538 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.538 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.538 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:46.538 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:46.538 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:46.797 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:46.797 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.797 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:46.797 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:46.797 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:46.797 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.797 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:46.797 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.797 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.797 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.797 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:46.797 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:46.797 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:47.056 00:20:47.056 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.056 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.056 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.315 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.315 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.315 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.315 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.315 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.315 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.315 { 00:20:47.315 "cntlid": 87, 00:20:47.315 "qid": 0, 00:20:47.315 "state": "enabled", 00:20:47.315 "thread": "nvmf_tgt_poll_group_000", 00:20:47.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:47.315 "listen_address": { 00:20:47.315 "trtype": 
"TCP", 00:20:47.315 "adrfam": "IPv4", 00:20:47.315 "traddr": "10.0.0.2", 00:20:47.315 "trsvcid": "4420" 00:20:47.315 }, 00:20:47.315 "peer_address": { 00:20:47.315 "trtype": "TCP", 00:20:47.315 "adrfam": "IPv4", 00:20:47.315 "traddr": "10.0.0.1", 00:20:47.315 "trsvcid": "36618" 00:20:47.315 }, 00:20:47.315 "auth": { 00:20:47.315 "state": "completed", 00:20:47.315 "digest": "sha384", 00:20:47.315 "dhgroup": "ffdhe6144" 00:20:47.315 } 00:20:47.315 } 00:20:47.315 ]' 00:20:47.315 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.315 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.315 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.315 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:47.315 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.315 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.315 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.315 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.573 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:20:47.573 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:20:48.138 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.138 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:48.138 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.138 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.138 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.138 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:48.138 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.138 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:48.138 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:48.397 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:48.397 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.397 13:52:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:48.397 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:48.397 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:48.397 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.397 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.397 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.397 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.397 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.397 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.397 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.397 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.962 00:20:48.962 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.962 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.962 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.962 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.962 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.962 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.962 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.962 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.962 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.962 { 00:20:48.962 "cntlid": 89, 00:20:48.962 "qid": 0, 00:20:48.962 "state": "enabled", 00:20:48.962 "thread": "nvmf_tgt_poll_group_000", 00:20:48.962 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:48.962 "listen_address": { 00:20:48.962 "trtype": "TCP", 00:20:48.962 "adrfam": "IPv4", 00:20:48.962 "traddr": "10.0.0.2", 00:20:48.962 "trsvcid": "4420" 00:20:48.962 }, 00:20:48.962 "peer_address": { 00:20:48.962 "trtype": "TCP", 00:20:48.962 "adrfam": "IPv4", 00:20:48.962 "traddr": "10.0.0.1", 00:20:48.962 "trsvcid": "45106" 00:20:48.962 }, 00:20:48.962 "auth": { 00:20:48.962 "state": "completed", 00:20:48.962 "digest": "sha384", 00:20:48.962 "dhgroup": "ffdhe8192" 00:20:48.962 } 00:20:48.962 } 00:20:48.962 ]' 00:20:48.962 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.962 13:52:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.962 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.219 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:49.219 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.219 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.219 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.219 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.477 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:20:49.477 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:20:50.042 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:20:50.042 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:50.042 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.042 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.042 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.042 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.042 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:50.042 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:50.042 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:50.042 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.042 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:50.042 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:50.042 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:50.042 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.042 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.042 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.042 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.042 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.042 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.042 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.042 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.608 00:20:50.608 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.608 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.608 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.866 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.866 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.866 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.866 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.867 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.867 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.867 { 00:20:50.867 "cntlid": 91, 00:20:50.867 "qid": 0, 00:20:50.867 "state": "enabled", 00:20:50.867 "thread": "nvmf_tgt_poll_group_000", 00:20:50.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:50.867 "listen_address": { 00:20:50.867 "trtype": "TCP", 00:20:50.867 "adrfam": "IPv4", 00:20:50.867 "traddr": "10.0.0.2", 00:20:50.867 "trsvcid": "4420" 00:20:50.867 }, 00:20:50.867 "peer_address": { 00:20:50.867 "trtype": "TCP", 00:20:50.867 "adrfam": "IPv4", 00:20:50.867 "traddr": "10.0.0.1", 00:20:50.867 "trsvcid": "45128" 00:20:50.867 }, 00:20:50.867 "auth": { 00:20:50.867 "state": "completed", 00:20:50.867 "digest": "sha384", 00:20:50.867 "dhgroup": "ffdhe8192" 00:20:50.867 } 00:20:50.867 } 00:20:50.867 ]' 00:20:50.867 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.867 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.867 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.867 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:50.867 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.867 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:51.125 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.125 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.125 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:20:51.125 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:20:51.691 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.691 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:51.691 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.691 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.691 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.691 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:51.691 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:51.691 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:51.950 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:51.950 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.950 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:51.950 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:51.950 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:51.950 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.950 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.950 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.950 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.950 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.950 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.950 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.950 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.516 00:20:52.516 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.516 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.516 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.772 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.772 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.772 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.772 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.772 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.772 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.772 { 00:20:52.772 "cntlid": 93, 00:20:52.772 "qid": 0, 00:20:52.772 "state": "enabled", 00:20:52.772 "thread": "nvmf_tgt_poll_group_000", 00:20:52.772 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:52.772 "listen_address": { 00:20:52.772 "trtype": "TCP", 00:20:52.772 "adrfam": "IPv4", 00:20:52.772 "traddr": "10.0.0.2", 00:20:52.772 "trsvcid": "4420" 00:20:52.772 }, 00:20:52.772 "peer_address": { 00:20:52.772 "trtype": "TCP", 00:20:52.772 "adrfam": "IPv4", 00:20:52.772 "traddr": "10.0.0.1", 00:20:52.772 "trsvcid": "45146" 00:20:52.772 }, 00:20:52.772 "auth": { 00:20:52.772 "state": "completed", 00:20:52.772 "digest": "sha384", 00:20:52.772 "dhgroup": "ffdhe8192" 00:20:52.772 } 00:20:52.772 } 00:20:52.772 ]' 00:20:52.772 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.773 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.773 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.773 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:52.773 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.773 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.773 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.773 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.030 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:20:53.030 13:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:20:53.597 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.597 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:53.597 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.597 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.597 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.597 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.597 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:53.597 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:53.856 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:53.856 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:20:53.856 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:53.856 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:53.856 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:53.856 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.856 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:53.856 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.856 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.856 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.856 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:53.856 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:53.856 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:54.114 00:20:54.114 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:54.114 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.114 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.372 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.372 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.372 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.372 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.372 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.372 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.372 { 00:20:54.372 "cntlid": 95, 00:20:54.372 "qid": 0, 00:20:54.372 "state": "enabled", 00:20:54.372 "thread": "nvmf_tgt_poll_group_000", 00:20:54.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:54.372 "listen_address": { 00:20:54.372 "trtype": "TCP", 00:20:54.372 "adrfam": "IPv4", 00:20:54.372 "traddr": "10.0.0.2", 00:20:54.372 "trsvcid": "4420" 00:20:54.372 }, 00:20:54.372 "peer_address": { 00:20:54.372 "trtype": "TCP", 00:20:54.372 "adrfam": "IPv4", 00:20:54.372 "traddr": "10.0.0.1", 00:20:54.372 "trsvcid": "45168" 00:20:54.372 }, 00:20:54.372 "auth": { 00:20:54.372 "state": "completed", 00:20:54.372 "digest": "sha384", 00:20:54.372 "dhgroup": "ffdhe8192" 00:20:54.372 } 00:20:54.372 } 00:20:54.372 ]' 00:20:54.372 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.372 13:52:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.372 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.630 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:54.630 13:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.630 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.630 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.630 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.889 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:20:54.889 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:20:55.455 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.455 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:55.455 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.455 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.455 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.455 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:55.455 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:55.455 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.455 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:55.455 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:55.455 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:55.455 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.455 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:55.455 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:55.455 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:55.455 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.455 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.455 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.455 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.455 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.455 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.455 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.455 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.713 00:20:55.713 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.713 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.713 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.971 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.971 13:52:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.971 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.971 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.971 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.971 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.971 { 00:20:55.971 "cntlid": 97, 00:20:55.971 "qid": 0, 00:20:55.971 "state": "enabled", 00:20:55.971 "thread": "nvmf_tgt_poll_group_000", 00:20:55.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:55.971 "listen_address": { 00:20:55.971 "trtype": "TCP", 00:20:55.971 "adrfam": "IPv4", 00:20:55.971 "traddr": "10.0.0.2", 00:20:55.971 "trsvcid": "4420" 00:20:55.971 }, 00:20:55.971 "peer_address": { 00:20:55.971 "trtype": "TCP", 00:20:55.971 "adrfam": "IPv4", 00:20:55.971 "traddr": "10.0.0.1", 00:20:55.971 "trsvcid": "45194" 00:20:55.971 }, 00:20:55.971 "auth": { 00:20:55.971 "state": "completed", 00:20:55.971 "digest": "sha512", 00:20:55.971 "dhgroup": "null" 00:20:55.971 } 00:20:55.971 } 00:20:55.971 ]' 00:20:55.971 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.971 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:55.971 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.971 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:56.230 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.230 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.230 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.230 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.230 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:20:56.230 13:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:20:56.797 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.056 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:57.056 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.056 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.056 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.056 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.056 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:57.056 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:57.056 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:57.056 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.056 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:57.056 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:57.056 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:57.056 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.056 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.056 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.056 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.056 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.056 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.056 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.056 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.314 00:20:57.314 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.314 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.314 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.576 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.576 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.576 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.576 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.576 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.576 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.576 { 00:20:57.576 "cntlid": 99, 
00:20:57.576 "qid": 0, 00:20:57.576 "state": "enabled", 00:20:57.576 "thread": "nvmf_tgt_poll_group_000", 00:20:57.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:57.576 "listen_address": { 00:20:57.576 "trtype": "TCP", 00:20:57.576 "adrfam": "IPv4", 00:20:57.576 "traddr": "10.0.0.2", 00:20:57.576 "trsvcid": "4420" 00:20:57.576 }, 00:20:57.576 "peer_address": { 00:20:57.576 "trtype": "TCP", 00:20:57.576 "adrfam": "IPv4", 00:20:57.576 "traddr": "10.0.0.1", 00:20:57.576 "trsvcid": "45236" 00:20:57.576 }, 00:20:57.576 "auth": { 00:20:57.576 "state": "completed", 00:20:57.576 "digest": "sha512", 00:20:57.576 "dhgroup": "null" 00:20:57.576 } 00:20:57.576 } 00:20:57.576 ]' 00:20:57.576 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.576 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:57.576 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.576 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:57.576 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.833 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.833 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.833 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.833 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret 
DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:20:57.833 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:20:58.398 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.398 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:58.398 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.398 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.656 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.656 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.656 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:58.656 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:58.656 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:20:58.656 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.656 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:58.656 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:58.656 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:58.656 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.656 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.656 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.656 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.656 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.656 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.656 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.656 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.913 00:20:58.913 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.913 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.913 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.170 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.170 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.170 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.170 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.170 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.170 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.170 { 00:20:59.170 "cntlid": 101, 00:20:59.170 "qid": 0, 00:20:59.170 "state": "enabled", 00:20:59.170 "thread": "nvmf_tgt_poll_group_000", 00:20:59.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:59.170 "listen_address": { 00:20:59.170 "trtype": "TCP", 00:20:59.170 "adrfam": "IPv4", 00:20:59.170 "traddr": "10.0.0.2", 00:20:59.170 "trsvcid": "4420" 00:20:59.170 }, 00:20:59.170 "peer_address": { 00:20:59.170 "trtype": "TCP", 00:20:59.170 "adrfam": "IPv4", 00:20:59.170 "traddr": "10.0.0.1", 00:20:59.170 "trsvcid": "44276" 00:20:59.170 }, 00:20:59.170 "auth": { 00:20:59.170 "state": "completed", 00:20:59.170 "digest": "sha512", 00:20:59.170 "dhgroup": "null" 00:20:59.170 } 00:20:59.170 } 
00:20:59.170 ]' 00:20:59.170 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.170 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:59.170 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.170 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:59.170 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.428 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.428 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.428 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.428 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:20:59.428 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:20:59.995 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.995 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.995 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:59.995 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.995 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.995 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.995 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.995 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:59.995 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:00.253 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:00.253 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.253 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:00.253 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:00.253 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:00.253 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.253 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:00.253 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.253 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.253 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.253 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:00.253 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:00.253 13:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:00.511 00:21:00.511 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.511 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.511 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.769 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.769 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:00.769 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.769 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.769 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.769 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.769 { 00:21:00.769 "cntlid": 103, 00:21:00.769 "qid": 0, 00:21:00.769 "state": "enabled", 00:21:00.769 "thread": "nvmf_tgt_poll_group_000", 00:21:00.769 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:00.769 "listen_address": { 00:21:00.769 "trtype": "TCP", 00:21:00.769 "adrfam": "IPv4", 00:21:00.769 "traddr": "10.0.0.2", 00:21:00.769 "trsvcid": "4420" 00:21:00.769 }, 00:21:00.769 "peer_address": { 00:21:00.769 "trtype": "TCP", 00:21:00.769 "adrfam": "IPv4", 00:21:00.769 "traddr": "10.0.0.1", 00:21:00.769 "trsvcid": "44308" 00:21:00.769 }, 00:21:00.769 "auth": { 00:21:00.769 "state": "completed", 00:21:00.769 "digest": "sha512", 00:21:00.769 "dhgroup": "null" 00:21:00.769 } 00:21:00.769 } 00:21:00.769 ]' 00:21:00.769 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.769 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.769 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.769 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:00.769 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.769 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.769 13:52:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.769 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.027 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:21:01.027 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:21:01.592 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.593 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:01.593 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.593 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.593 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.593 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.593 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.593 13:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:01.593 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:01.851 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:01.851 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.851 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:01.851 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:01.851 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:01.851 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.851 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.851 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.851 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.851 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.851 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.851 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.851 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.110 00:21:02.110 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.110 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.110 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.369 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.369 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.369 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.369 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.369 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.369 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.369 { 00:21:02.369 "cntlid": 105, 00:21:02.369 "qid": 0, 00:21:02.369 "state": "enabled", 00:21:02.369 "thread": "nvmf_tgt_poll_group_000", 00:21:02.369 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:02.369 "listen_address": { 00:21:02.369 "trtype": "TCP", 00:21:02.369 "adrfam": "IPv4", 00:21:02.369 "traddr": "10.0.0.2", 00:21:02.369 "trsvcid": "4420" 00:21:02.369 }, 00:21:02.369 "peer_address": { 00:21:02.369 "trtype": "TCP", 00:21:02.369 "adrfam": "IPv4", 00:21:02.369 "traddr": "10.0.0.1", 00:21:02.369 "trsvcid": "44340" 00:21:02.369 }, 00:21:02.369 "auth": { 00:21:02.369 "state": "completed", 00:21:02.369 "digest": "sha512", 00:21:02.369 "dhgroup": "ffdhe2048" 00:21:02.369 } 00:21:02.369 } 00:21:02.369 ]' 00:21:02.369 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.369 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.369 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.369 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:02.369 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.369 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.369 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.369 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.628 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret 
DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:21:02.628 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:21:03.194 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.194 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:03.194 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.194 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.194 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.194 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.194 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:03.194 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:03.452 13:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:03.452 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.452 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:03.452 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:03.452 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:03.452 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.452 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.452 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.452 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.452 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.452 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.452 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.452 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.710 00:21:03.710 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.710 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.710 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.968 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.968 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.968 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.968 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.968 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.968 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.968 { 00:21:03.968 "cntlid": 107, 00:21:03.968 "qid": 0, 00:21:03.968 "state": "enabled", 00:21:03.968 "thread": "nvmf_tgt_poll_group_000", 00:21:03.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:03.968 "listen_address": { 00:21:03.968 "trtype": "TCP", 00:21:03.968 "adrfam": "IPv4", 00:21:03.968 "traddr": "10.0.0.2", 00:21:03.968 "trsvcid": "4420" 00:21:03.968 }, 00:21:03.968 "peer_address": { 00:21:03.968 "trtype": "TCP", 00:21:03.968 "adrfam": "IPv4", 00:21:03.968 "traddr": "10.0.0.1", 00:21:03.968 "trsvcid": "44362" 00:21:03.968 }, 00:21:03.968 "auth": { 00:21:03.968 "state": 
"completed", 00:21:03.968 "digest": "sha512", 00:21:03.968 "dhgroup": "ffdhe2048" 00:21:03.968 } 00:21:03.968 } 00:21:03.968 ]' 00:21:03.968 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.968 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.968 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.968 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:03.968 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.968 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.968 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.968 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.226 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:21:04.226 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:21:04.792 13:52:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.792 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:04.792 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.792 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.792 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.792 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.792 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:04.792 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:05.051 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:05.051 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.051 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:05.051 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:05.051 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:05.051 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.051 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.051 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.051 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.051 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.051 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.051 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.051 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.309 00:21:05.309 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.309 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.309 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.309 
13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.309 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.309 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.309 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.309 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.309 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.309 { 00:21:05.309 "cntlid": 109, 00:21:05.309 "qid": 0, 00:21:05.309 "state": "enabled", 00:21:05.309 "thread": "nvmf_tgt_poll_group_000", 00:21:05.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:05.309 "listen_address": { 00:21:05.309 "trtype": "TCP", 00:21:05.309 "adrfam": "IPv4", 00:21:05.309 "traddr": "10.0.0.2", 00:21:05.309 "trsvcid": "4420" 00:21:05.309 }, 00:21:05.309 "peer_address": { 00:21:05.309 "trtype": "TCP", 00:21:05.309 "adrfam": "IPv4", 00:21:05.309 "traddr": "10.0.0.1", 00:21:05.309 "trsvcid": "44386" 00:21:05.309 }, 00:21:05.309 "auth": { 00:21:05.310 "state": "completed", 00:21:05.310 "digest": "sha512", 00:21:05.310 "dhgroup": "ffdhe2048" 00:21:05.310 } 00:21:05.310 } 00:21:05.310 ]' 00:21:05.310 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.567 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.567 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.567 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:05.567 13:52:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.567 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.567 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.567 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.835 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:21:05.835 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:21:06.401 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.401 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:06.401 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.401 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.401 
13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.401 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.401 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:06.401 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:06.401 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:06.401 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.401 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:06.401 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:06.401 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:06.401 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.401 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:06.401 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.401 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.659 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.659 13:52:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:06.659 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:06.659 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:06.659 00:21:06.659 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.659 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.659 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.917 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.917 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.917 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.917 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.917 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.917 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.917 { 00:21:06.917 "cntlid": 111, 
00:21:06.917 "qid": 0, 00:21:06.917 "state": "enabled", 00:21:06.917 "thread": "nvmf_tgt_poll_group_000", 00:21:06.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:06.917 "listen_address": { 00:21:06.917 "trtype": "TCP", 00:21:06.917 "adrfam": "IPv4", 00:21:06.917 "traddr": "10.0.0.2", 00:21:06.917 "trsvcid": "4420" 00:21:06.917 }, 00:21:06.917 "peer_address": { 00:21:06.917 "trtype": "TCP", 00:21:06.917 "adrfam": "IPv4", 00:21:06.917 "traddr": "10.0.0.1", 00:21:06.917 "trsvcid": "44406" 00:21:06.917 }, 00:21:06.917 "auth": { 00:21:06.917 "state": "completed", 00:21:06.917 "digest": "sha512", 00:21:06.917 "dhgroup": "ffdhe2048" 00:21:06.917 } 00:21:06.917 } 00:21:06.917 ]' 00:21:06.917 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.917 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.917 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.175 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:07.175 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.175 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.175 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.175 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.175 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:21:07.175 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:21:07.740 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.998 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:07.998 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.998 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.998 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.998 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:07.998 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.998 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:07.998 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:07.998 13:52:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:07.998 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.998 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:07.998 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:07.998 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:07.998 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.998 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.998 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.998 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.998 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.998 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.998 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.998 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.260 00:21:08.260 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.260 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.260 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.518 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.518 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.518 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.518 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.518 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.518 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.518 { 00:21:08.518 "cntlid": 113, 00:21:08.518 "qid": 0, 00:21:08.518 "state": "enabled", 00:21:08.518 "thread": "nvmf_tgt_poll_group_000", 00:21:08.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:08.518 "listen_address": { 00:21:08.518 "trtype": "TCP", 00:21:08.518 "adrfam": "IPv4", 00:21:08.518 "traddr": "10.0.0.2", 00:21:08.518 "trsvcid": "4420" 00:21:08.518 }, 00:21:08.518 "peer_address": { 00:21:08.518 "trtype": "TCP", 00:21:08.518 "adrfam": "IPv4", 00:21:08.518 "traddr": "10.0.0.1", 00:21:08.518 "trsvcid": "42394" 00:21:08.518 }, 00:21:08.518 "auth": { 00:21:08.518 "state": 
"completed", 00:21:08.518 "digest": "sha512", 00:21:08.518 "dhgroup": "ffdhe3072" 00:21:08.518 } 00:21:08.518 } 00:21:08.518 ]' 00:21:08.518 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.518 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.518 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.518 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:08.518 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.776 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.776 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.776 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.776 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:21:08.776 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret 
DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:21:09.340 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.599 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:09.599 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.599 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.599 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.599 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.599 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:09.599 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:09.599 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:09.599 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.599 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:09.599 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:09.599 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:21:09.599 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.599 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.599 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.599 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.599 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.599 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.599 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.599 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.857 00:21:09.857 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.857 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.857 13:52:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.115 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.115 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.115 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.115 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.115 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.115 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.115 { 00:21:10.115 "cntlid": 115, 00:21:10.115 "qid": 0, 00:21:10.115 "state": "enabled", 00:21:10.115 "thread": "nvmf_tgt_poll_group_000", 00:21:10.115 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:10.116 "listen_address": { 00:21:10.116 "trtype": "TCP", 00:21:10.116 "adrfam": "IPv4", 00:21:10.116 "traddr": "10.0.0.2", 00:21:10.116 "trsvcid": "4420" 00:21:10.116 }, 00:21:10.116 "peer_address": { 00:21:10.116 "trtype": "TCP", 00:21:10.116 "adrfam": "IPv4", 00:21:10.116 "traddr": "10.0.0.1", 00:21:10.116 "trsvcid": "42424" 00:21:10.116 }, 00:21:10.116 "auth": { 00:21:10.116 "state": "completed", 00:21:10.116 "digest": "sha512", 00:21:10.116 "dhgroup": "ffdhe3072" 00:21:10.116 } 00:21:10.116 } 00:21:10.116 ]' 00:21:10.116 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.116 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.116 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.116 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:10.116 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.374 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.374 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.374 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.374 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:21:10.374 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:21:10.988 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.988 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:10.988 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.988 13:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.988 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.988 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.988 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:10.988 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:11.288 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:21:11.288 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.288 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:11.288 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:11.288 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:11.288 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.288 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.288 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.288 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.288 13:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.288 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.288 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.288 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.546 00:21:11.546 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.546 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.546 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.804 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.804 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.804 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.804 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.804 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.804 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.804 { 00:21:11.804 "cntlid": 117, 00:21:11.804 "qid": 0, 00:21:11.804 "state": "enabled", 00:21:11.804 "thread": "nvmf_tgt_poll_group_000", 00:21:11.804 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:11.804 "listen_address": { 00:21:11.804 "trtype": "TCP", 00:21:11.804 "adrfam": "IPv4", 00:21:11.804 "traddr": "10.0.0.2", 00:21:11.804 "trsvcid": "4420" 00:21:11.804 }, 00:21:11.804 "peer_address": { 00:21:11.804 "trtype": "TCP", 00:21:11.804 "adrfam": "IPv4", 00:21:11.804 "traddr": "10.0.0.1", 00:21:11.804 "trsvcid": "42454" 00:21:11.804 }, 00:21:11.804 "auth": { 00:21:11.804 "state": "completed", 00:21:11.804 "digest": "sha512", 00:21:11.804 "dhgroup": "ffdhe3072" 00:21:11.804 } 00:21:11.804 } 00:21:11.804 ]' 00:21:11.804 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.804 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.804 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.804 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:11.804 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.804 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.804 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.804 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:21:12.063 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:21:12.063 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:21:12.628 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.628 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:12.628 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.628 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.628 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.628 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.628 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:12.628 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:12.886 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:12.886 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.886 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:12.886 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:12.886 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:12.886 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.886 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:12.886 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.886 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.886 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.886 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:12.886 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:12.886 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:13.142 00:21:13.142 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.142 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.143 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.400 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.400 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.400 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.400 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.400 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.400 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.400 { 00:21:13.400 "cntlid": 119, 00:21:13.400 "qid": 0, 00:21:13.400 "state": "enabled", 00:21:13.400 "thread": "nvmf_tgt_poll_group_000", 00:21:13.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:13.400 "listen_address": { 00:21:13.400 "trtype": "TCP", 00:21:13.400 "adrfam": "IPv4", 00:21:13.400 "traddr": "10.0.0.2", 00:21:13.400 "trsvcid": "4420" 00:21:13.400 }, 00:21:13.400 "peer_address": { 00:21:13.400 "trtype": "TCP", 00:21:13.400 "adrfam": "IPv4", 00:21:13.400 "traddr": "10.0.0.1", 00:21:13.400 "trsvcid": "42486" 00:21:13.400 }, 00:21:13.400 "auth": { 00:21:13.400 
"state": "completed", 00:21:13.400 "digest": "sha512", 00:21:13.400 "dhgroup": "ffdhe3072" 00:21:13.400 } 00:21:13.400 } 00:21:13.400 ]' 00:21:13.400 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.400 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.400 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.400 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:13.400 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.400 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.400 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.400 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.657 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:21:13.657 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:21:14.222 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.222 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.222 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:14.222 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.222 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.222 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.222 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:14.222 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.222 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:14.222 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:14.480 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:14.480 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.480 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:14.480 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:14.480 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:14.480 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.480 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.480 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.480 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.480 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.480 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.480 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.480 13:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.738 00:21:14.738 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.738 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.738 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.995 
13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.995 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.995 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.995 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.995 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.995 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.995 { 00:21:14.995 "cntlid": 121, 00:21:14.995 "qid": 0, 00:21:14.995 "state": "enabled", 00:21:14.995 "thread": "nvmf_tgt_poll_group_000", 00:21:14.995 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:14.995 "listen_address": { 00:21:14.995 "trtype": "TCP", 00:21:14.995 "adrfam": "IPv4", 00:21:14.995 "traddr": "10.0.0.2", 00:21:14.995 "trsvcid": "4420" 00:21:14.995 }, 00:21:14.995 "peer_address": { 00:21:14.995 "trtype": "TCP", 00:21:14.995 "adrfam": "IPv4", 00:21:14.995 "traddr": "10.0.0.1", 00:21:14.995 "trsvcid": "42510" 00:21:14.995 }, 00:21:14.995 "auth": { 00:21:14.995 "state": "completed", 00:21:14.995 "digest": "sha512", 00:21:14.995 "dhgroup": "ffdhe4096" 00:21:14.995 } 00:21:14.995 } 00:21:14.995 ]' 00:21:14.995 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.995 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.995 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.995 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:14.995 13:52:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.995 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.995 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.995 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.252 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:21:15.252 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:21:15.817 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.817 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:15.817 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.817 13:52:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.817 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.817 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.817 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:15.817 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:16.074 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:16.074 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.074 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:16.074 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:16.074 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:16.074 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.074 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.074 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.074 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.074 13:52:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.074 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.074 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.074 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.331 00:21:16.331 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.331 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.331 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.331 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.589 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.589 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.589 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.589 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.589 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.589 { 00:21:16.589 "cntlid": 123, 00:21:16.589 "qid": 0, 00:21:16.589 "state": "enabled", 00:21:16.589 "thread": "nvmf_tgt_poll_group_000", 00:21:16.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:16.589 "listen_address": { 00:21:16.589 "trtype": "TCP", 00:21:16.589 "adrfam": "IPv4", 00:21:16.589 "traddr": "10.0.0.2", 00:21:16.589 "trsvcid": "4420" 00:21:16.589 }, 00:21:16.589 "peer_address": { 00:21:16.589 "trtype": "TCP", 00:21:16.589 "adrfam": "IPv4", 00:21:16.589 "traddr": "10.0.0.1", 00:21:16.589 "trsvcid": "42550" 00:21:16.589 }, 00:21:16.589 "auth": { 00:21:16.589 "state": "completed", 00:21:16.589 "digest": "sha512", 00:21:16.589 "dhgroup": "ffdhe4096" 00:21:16.589 } 00:21:16.589 } 00:21:16.589 ]' 00:21:16.589 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.589 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.589 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.589 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:16.589 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.590 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.590 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.590 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:21:16.847 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:21:16.847 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:21:17.412 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.413 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:17.413 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.413 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.413 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.413 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.413 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:17.413 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:17.670 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:17.670 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.670 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:17.670 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:17.670 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:17.670 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.670 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.670 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.670 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.670 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.670 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.670 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.670 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.928 00:21:17.928 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.928 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.928 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.186 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.186 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.186 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.186 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.186 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.186 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.186 { 00:21:18.186 "cntlid": 125, 00:21:18.186 "qid": 0, 00:21:18.186 "state": "enabled", 00:21:18.186 "thread": "nvmf_tgt_poll_group_000", 00:21:18.186 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:18.186 "listen_address": { 00:21:18.186 "trtype": "TCP", 00:21:18.186 "adrfam": "IPv4", 00:21:18.186 "traddr": "10.0.0.2", 00:21:18.186 "trsvcid": "4420" 00:21:18.186 }, 00:21:18.186 "peer_address": { 00:21:18.186 "trtype": "TCP", 00:21:18.186 "adrfam": "IPv4", 
00:21:18.186 "traddr": "10.0.0.1", 00:21:18.186 "trsvcid": "37838" 00:21:18.186 }, 00:21:18.186 "auth": { 00:21:18.186 "state": "completed", 00:21:18.186 "digest": "sha512", 00:21:18.186 "dhgroup": "ffdhe4096" 00:21:18.186 } 00:21:18.186 } 00:21:18.186 ]' 00:21:18.186 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.186 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.186 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.186 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:18.186 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.186 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.186 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.186 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.443 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:21:18.443 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:21:19.009 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.009 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:19.009 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.009 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.009 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.009 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.009 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:19.009 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:19.267 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:19.267 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.267 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:19.267 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:19.267 13:53:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:19.267 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.267 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:19.267 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.267 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.267 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.267 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:19.267 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:19.267 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:19.524 00:21:19.524 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.524 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.524 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.784 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.784 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.784 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.784 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.784 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.784 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.784 { 00:21:19.784 "cntlid": 127, 00:21:19.784 "qid": 0, 00:21:19.784 "state": "enabled", 00:21:19.784 "thread": "nvmf_tgt_poll_group_000", 00:21:19.784 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:19.784 "listen_address": { 00:21:19.784 "trtype": "TCP", 00:21:19.784 "adrfam": "IPv4", 00:21:19.784 "traddr": "10.0.0.2", 00:21:19.784 "trsvcid": "4420" 00:21:19.784 }, 00:21:19.784 "peer_address": { 00:21:19.784 "trtype": "TCP", 00:21:19.784 "adrfam": "IPv4", 00:21:19.784 "traddr": "10.0.0.1", 00:21:19.784 "trsvcid": "37850" 00:21:19.784 }, 00:21:19.784 "auth": { 00:21:19.784 "state": "completed", 00:21:19.784 "digest": "sha512", 00:21:19.784 "dhgroup": "ffdhe4096" 00:21:19.784 } 00:21:19.784 } 00:21:19.784 ]' 00:21:19.784 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.784 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.784 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.784 13:53:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:19.784 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.784 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.784 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.784 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.096 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:21:20.096 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:21:20.661 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.661 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:20.661 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.661 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:20.661 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.661 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:20.661 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.661 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:20.661 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:20.661 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:20.661 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.661 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:20.661 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:20.661 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:20.661 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.661 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.661 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.661 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:20.919 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.919 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.919 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.919 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.177 00:21:21.177 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.177 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.177 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.435 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.435 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.435 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.435 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.435 13:53:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.435 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.435 { 00:21:21.435 "cntlid": 129, 00:21:21.435 "qid": 0, 00:21:21.435 "state": "enabled", 00:21:21.435 "thread": "nvmf_tgt_poll_group_000", 00:21:21.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:21.435 "listen_address": { 00:21:21.435 "trtype": "TCP", 00:21:21.435 "adrfam": "IPv4", 00:21:21.435 "traddr": "10.0.0.2", 00:21:21.435 "trsvcid": "4420" 00:21:21.435 }, 00:21:21.435 "peer_address": { 00:21:21.435 "trtype": "TCP", 00:21:21.435 "adrfam": "IPv4", 00:21:21.435 "traddr": "10.0.0.1", 00:21:21.435 "trsvcid": "37872" 00:21:21.435 }, 00:21:21.435 "auth": { 00:21:21.435 "state": "completed", 00:21:21.435 "digest": "sha512", 00:21:21.435 "dhgroup": "ffdhe6144" 00:21:21.435 } 00:21:21.435 } 00:21:21.435 ]' 00:21:21.435 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.435 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.435 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.435 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:21.435 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.435 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.435 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.435 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.692 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:21:21.692 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:21:22.258 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.258 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:22.258 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.258 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.258 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.258 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.258 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:22.258 13:53:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:22.516 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:22.516 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.516 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:22.516 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:22.516 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:22.516 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.516 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.516 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.516 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.516 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.516 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.516 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.516 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.774 00:21:22.774 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.774 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.774 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.032 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.032 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.032 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.032 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.032 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.033 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.033 { 00:21:23.033 "cntlid": 131, 00:21:23.033 "qid": 0, 00:21:23.033 "state": "enabled", 00:21:23.033 "thread": "nvmf_tgt_poll_group_000", 00:21:23.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:23.033 "listen_address": { 00:21:23.033 "trtype": "TCP", 00:21:23.033 "adrfam": "IPv4", 00:21:23.033 "traddr": "10.0.0.2", 00:21:23.033 
"trsvcid": "4420" 00:21:23.033 }, 00:21:23.033 "peer_address": { 00:21:23.033 "trtype": "TCP", 00:21:23.033 "adrfam": "IPv4", 00:21:23.033 "traddr": "10.0.0.1", 00:21:23.033 "trsvcid": "37894" 00:21:23.033 }, 00:21:23.033 "auth": { 00:21:23.033 "state": "completed", 00:21:23.033 "digest": "sha512", 00:21:23.033 "dhgroup": "ffdhe6144" 00:21:23.033 } 00:21:23.033 } 00:21:23.033 ]' 00:21:23.033 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.033 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.033 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.033 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:23.033 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.033 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.033 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.033 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.289 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:21:23.289 13:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:21:23.855 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.855 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:23.855 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.855 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.855 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.855 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.855 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:23.855 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:24.115 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:24.115 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.115 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:24.115 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:24.115 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:24.115 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.115 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.115 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.115 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.115 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.115 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.115 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.115 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.374 00:21:24.633 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.633 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:21:24.633 13:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.633 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.633 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.633 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.633 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.633 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.633 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.633 { 00:21:24.633 "cntlid": 133, 00:21:24.633 "qid": 0, 00:21:24.633 "state": "enabled", 00:21:24.633 "thread": "nvmf_tgt_poll_group_000", 00:21:24.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:24.633 "listen_address": { 00:21:24.633 "trtype": "TCP", 00:21:24.633 "adrfam": "IPv4", 00:21:24.633 "traddr": "10.0.0.2", 00:21:24.633 "trsvcid": "4420" 00:21:24.633 }, 00:21:24.633 "peer_address": { 00:21:24.633 "trtype": "TCP", 00:21:24.633 "adrfam": "IPv4", 00:21:24.633 "traddr": "10.0.0.1", 00:21:24.633 "trsvcid": "37930" 00:21:24.633 }, 00:21:24.633 "auth": { 00:21:24.633 "state": "completed", 00:21:24.633 "digest": "sha512", 00:21:24.633 "dhgroup": "ffdhe6144" 00:21:24.633 } 00:21:24.633 } 00:21:24.633 ]' 00:21:24.633 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.633 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.633 13:53:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.891 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:24.891 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.891 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.891 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.891 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.150 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:21:25.150 13:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:21:25.717 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.717 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:25.717 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.717 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.717 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.717 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.717 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:25.717 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:25.717 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:25.717 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.717 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:25.717 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:25.717 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:25.717 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.717 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:25.717 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.717 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.717 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.717 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:25.717 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:25.717 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.284 00:21:26.284 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.284 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.284 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.284 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.284 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.284 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.284 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:26.284 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.284 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.284 { 00:21:26.284 "cntlid": 135, 00:21:26.284 "qid": 0, 00:21:26.284 "state": "enabled", 00:21:26.284 "thread": "nvmf_tgt_poll_group_000", 00:21:26.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:26.284 "listen_address": { 00:21:26.284 "trtype": "TCP", 00:21:26.284 "adrfam": "IPv4", 00:21:26.284 "traddr": "10.0.0.2", 00:21:26.284 "trsvcid": "4420" 00:21:26.284 }, 00:21:26.284 "peer_address": { 00:21:26.284 "trtype": "TCP", 00:21:26.284 "adrfam": "IPv4", 00:21:26.284 "traddr": "10.0.0.1", 00:21:26.284 "trsvcid": "37978" 00:21:26.284 }, 00:21:26.284 "auth": { 00:21:26.284 "state": "completed", 00:21:26.284 "digest": "sha512", 00:21:26.284 "dhgroup": "ffdhe6144" 00:21:26.284 } 00:21:26.284 } 00:21:26.284 ]' 00:21:26.284 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.542 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.542 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.542 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:26.542 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.542 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.542 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.542 13:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.800 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:21:26.800 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:21:27.367 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.367 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:27.367 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.367 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.367 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.367 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:27.367 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.367 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:27.367 13:53:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:27.367 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:27.367 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.367 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:27.367 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:27.367 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:27.367 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.367 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.367 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.367 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.367 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.367 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.367 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.367 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.933 00:21:27.933 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.933 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.933 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.191 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.191 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.191 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.191 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.191 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.191 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.191 { 00:21:28.191 "cntlid": 137, 00:21:28.191 "qid": 0, 00:21:28.191 "state": "enabled", 00:21:28.191 "thread": "nvmf_tgt_poll_group_000", 00:21:28.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:28.191 "listen_address": { 00:21:28.191 "trtype": "TCP", 00:21:28.191 "adrfam": "IPv4", 00:21:28.191 "traddr": "10.0.0.2", 00:21:28.191 
"trsvcid": "4420" 00:21:28.191 }, 00:21:28.191 "peer_address": { 00:21:28.191 "trtype": "TCP", 00:21:28.191 "adrfam": "IPv4", 00:21:28.191 "traddr": "10.0.0.1", 00:21:28.191 "trsvcid": "51686" 00:21:28.191 }, 00:21:28.191 "auth": { 00:21:28.191 "state": "completed", 00:21:28.191 "digest": "sha512", 00:21:28.191 "dhgroup": "ffdhe8192" 00:21:28.191 } 00:21:28.191 } 00:21:28.191 ]' 00:21:28.192 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.192 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.192 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.192 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:28.192 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.192 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.192 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.192 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.448 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:21:28.448 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:21:29.011 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.011 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:29.011 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.011 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.011 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.011 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.011 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:29.011 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:29.269 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:29.269 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.269 13:53:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:29.269 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:29.269 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:29.269 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.269 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.269 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.269 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.269 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.269 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.269 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.269 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.834 00:21:29.834 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.834 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.834 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.093 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.093 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.093 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.093 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.093 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.093 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.093 { 00:21:30.093 "cntlid": 139, 00:21:30.093 "qid": 0, 00:21:30.093 "state": "enabled", 00:21:30.093 "thread": "nvmf_tgt_poll_group_000", 00:21:30.093 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:30.093 "listen_address": { 00:21:30.093 "trtype": "TCP", 00:21:30.093 "adrfam": "IPv4", 00:21:30.093 "traddr": "10.0.0.2", 00:21:30.093 "trsvcid": "4420" 00:21:30.093 }, 00:21:30.093 "peer_address": { 00:21:30.093 "trtype": "TCP", 00:21:30.093 "adrfam": "IPv4", 00:21:30.093 "traddr": "10.0.0.1", 00:21:30.093 "trsvcid": "51706" 00:21:30.093 }, 00:21:30.093 "auth": { 00:21:30.093 "state": "completed", 00:21:30.093 "digest": "sha512", 00:21:30.093 "dhgroup": "ffdhe8192" 00:21:30.093 } 00:21:30.093 } 00:21:30.093 ]' 00:21:30.093 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.093 13:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.093 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.093 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:30.093 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.093 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.093 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.093 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.353 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:21:30.353 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: --dhchap-ctrl-secret DHHC-1:02:ZTNjMDU1NGZiODBhMDE4ZWM2MTRkOTkxMzBjMWJkNDcwM2VjZDI3N2QwZmQ3MjJi0L9HAw==: 00:21:30.925 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.925 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:30.925 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.925 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.925 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.925 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.925 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:30.925 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:31.183 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:31.183 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.183 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:31.183 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:31.183 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:31.183 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.183 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:31.183 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.183 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.183 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.183 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.183 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.183 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.442 00:21:31.702 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.702 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.702 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.702 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.702 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.702 13:53:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.702 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.702 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.702 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.702 { 00:21:31.702 "cntlid": 141, 00:21:31.702 "qid": 0, 00:21:31.702 "state": "enabled", 00:21:31.702 "thread": "nvmf_tgt_poll_group_000", 00:21:31.702 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:31.702 "listen_address": { 00:21:31.702 "trtype": "TCP", 00:21:31.702 "adrfam": "IPv4", 00:21:31.702 "traddr": "10.0.0.2", 00:21:31.702 "trsvcid": "4420" 00:21:31.702 }, 00:21:31.702 "peer_address": { 00:21:31.702 "trtype": "TCP", 00:21:31.702 "adrfam": "IPv4", 00:21:31.702 "traddr": "10.0.0.1", 00:21:31.702 "trsvcid": "51736" 00:21:31.702 }, 00:21:31.702 "auth": { 00:21:31.702 "state": "completed", 00:21:31.702 "digest": "sha512", 00:21:31.702 "dhgroup": "ffdhe8192" 00:21:31.702 } 00:21:31.702 } 00:21:31.702 ]' 00:21:31.702 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.961 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.961 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.961 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:31.961 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.961 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.961 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.961 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.219 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:21:32.219 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:01:OWIzMDlkMWU0NTMwZWUxNzM4NWFkZjViMTc1NTczZjUn63Nq: 00:21:32.785 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.785 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:32.785 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.785 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.785 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.785 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.785 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:32.785 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:33.044 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:33.044 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.044 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:33.044 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:33.044 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:33.044 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.044 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:33.044 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.044 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.044 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.044 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:33.044 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.044 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.301 00:21:33.301 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.301 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.301 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.560 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.560 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.560 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.560 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.560 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.560 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.560 { 00:21:33.560 "cntlid": 143, 00:21:33.560 "qid": 0, 00:21:33.560 "state": "enabled", 00:21:33.560 "thread": "nvmf_tgt_poll_group_000", 00:21:33.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:33.560 "listen_address": { 00:21:33.560 "trtype": "TCP", 00:21:33.560 "adrfam": 
"IPv4", 00:21:33.560 "traddr": "10.0.0.2", 00:21:33.560 "trsvcid": "4420" 00:21:33.560 }, 00:21:33.560 "peer_address": { 00:21:33.560 "trtype": "TCP", 00:21:33.560 "adrfam": "IPv4", 00:21:33.560 "traddr": "10.0.0.1", 00:21:33.560 "trsvcid": "51758" 00:21:33.560 }, 00:21:33.560 "auth": { 00:21:33.560 "state": "completed", 00:21:33.560 "digest": "sha512", 00:21:33.560 "dhgroup": "ffdhe8192" 00:21:33.560 } 00:21:33.560 } 00:21:33.560 ]' 00:21:33.560 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.560 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.560 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.818 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:33.818 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.818 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.818 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.819 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.077 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:21:34.077 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:21:34.644 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.644 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:34.644 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.644 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.644 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.644 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:34.644 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:34.644 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:34.644 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:34.644 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:34.644 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:34.644 13:53:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:34.644 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.644 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:34.644 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:34.644 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:34.644 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.644 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.644 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.644 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.644 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.644 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.644 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.645 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.308 00:21:35.308 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.308 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.308 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.308 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.308 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.308 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.308 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.308 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.308 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.308 { 00:21:35.308 "cntlid": 145, 00:21:35.308 "qid": 0, 00:21:35.308 "state": "enabled", 00:21:35.308 "thread": "nvmf_tgt_poll_group_000", 00:21:35.308 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:35.308 "listen_address": { 00:21:35.308 "trtype": "TCP", 00:21:35.308 "adrfam": "IPv4", 00:21:35.308 "traddr": "10.0.0.2", 00:21:35.308 "trsvcid": "4420" 00:21:35.308 }, 00:21:35.308 "peer_address": { 00:21:35.308 "trtype": "TCP", 00:21:35.308 "adrfam": "IPv4", 00:21:35.308 "traddr": "10.0.0.1", 00:21:35.308 "trsvcid": "51788" 00:21:35.308 }, 00:21:35.308 "auth": { 00:21:35.308 "state": 
"completed", 00:21:35.308 "digest": "sha512", 00:21:35.308 "dhgroup": "ffdhe8192" 00:21:35.308 } 00:21:35.308 } 00:21:35.308 ]' 00:21:35.308 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.565 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.565 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.565 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:35.565 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.565 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.565 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.565 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.823 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:21:35.823 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWExODllZTM1ZTFjYzBmZmZiOGEyOTAzY2Q4ZTZmY2Y5ZWZkYjlmODg1ZTBlNDkws6FgHg==: --dhchap-ctrl-secret 
DHHC-1:03:ZTUwZGJiM2ZjMDVkMGY3MjcxYWI2MDNkM2E0ODQxYTlhZjkyYzIxOGYwYjdmNjE1MGNhNjFlYjk3NWNhNzg5M0uOnqg=: 00:21:36.393 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.393 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:36.393 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.393 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.393 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.393 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:21:36.393 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.393 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.393 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.393 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:36.393 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:36.393 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:36.393 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:21:36.393 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.393 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:36.393 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.393 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:36.393 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:36.393 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:36.959 request: 00:21:36.959 { 00:21:36.959 "name": "nvme0", 00:21:36.959 "trtype": "tcp", 00:21:36.959 "traddr": "10.0.0.2", 00:21:36.959 "adrfam": "ipv4", 00:21:36.959 "trsvcid": "4420", 00:21:36.959 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:36.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:36.959 "prchk_reftag": false, 00:21:36.959 "prchk_guard": false, 00:21:36.960 "hdgst": false, 00:21:36.960 "ddgst": false, 00:21:36.960 "dhchap_key": "key2", 00:21:36.960 "allow_unrecognized_csi": false, 00:21:36.960 "method": "bdev_nvme_attach_controller", 00:21:36.960 "req_id": 1 00:21:36.960 } 00:21:36.960 Got JSON-RPC error response 00:21:36.960 response: 00:21:36.960 { 00:21:36.960 "code": -5, 00:21:36.960 "message": 
"Input/output error" 00:21:36.960 } 00:21:36.960 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:36.960 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:36.960 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:36.960 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:36.960 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:36.960 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.960 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.960 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.960 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.960 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.960 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.960 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.960 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:36.960 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:36.960 13:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:36.960 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:36.960 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.960 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:36.960 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.960 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:36.960 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:36.960 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:37.217 request: 00:21:37.217 { 00:21:37.217 "name": "nvme0", 00:21:37.217 "trtype": "tcp", 00:21:37.217 "traddr": "10.0.0.2", 00:21:37.217 "adrfam": "ipv4", 00:21:37.217 "trsvcid": "4420", 00:21:37.217 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:37.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:37.218 "prchk_reftag": false, 00:21:37.218 "prchk_guard": false, 00:21:37.218 "hdgst": 
false, 00:21:37.218 "ddgst": false, 00:21:37.218 "dhchap_key": "key1", 00:21:37.218 "dhchap_ctrlr_key": "ckey2", 00:21:37.218 "allow_unrecognized_csi": false, 00:21:37.218 "method": "bdev_nvme_attach_controller", 00:21:37.218 "req_id": 1 00:21:37.218 } 00:21:37.218 Got JSON-RPC error response 00:21:37.218 response: 00:21:37.218 { 00:21:37.218 "code": -5, 00:21:37.218 "message": "Input/output error" 00:21:37.218 } 00:21:37.218 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:37.218 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:37.218 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:37.218 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:37.218 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:37.218 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.218 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.218 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.218 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:21:37.218 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.218 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.218 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.218 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.218 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:37.218 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.218 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:37.218 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.218 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:37.218 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.218 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.218 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.218 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.783 request: 00:21:37.783 { 00:21:37.783 "name": "nvme0", 00:21:37.783 "trtype": 
"tcp", 00:21:37.783 "traddr": "10.0.0.2", 00:21:37.783 "adrfam": "ipv4", 00:21:37.783 "trsvcid": "4420", 00:21:37.783 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:37.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:37.783 "prchk_reftag": false, 00:21:37.783 "prchk_guard": false, 00:21:37.783 "hdgst": false, 00:21:37.783 "ddgst": false, 00:21:37.783 "dhchap_key": "key1", 00:21:37.783 "dhchap_ctrlr_key": "ckey1", 00:21:37.783 "allow_unrecognized_csi": false, 00:21:37.783 "method": "bdev_nvme_attach_controller", 00:21:37.783 "req_id": 1 00:21:37.783 } 00:21:37.783 Got JSON-RPC error response 00:21:37.783 response: 00:21:37.783 { 00:21:37.783 "code": -5, 00:21:37.783 "message": "Input/output error" 00:21:37.783 } 00:21:37.783 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:37.783 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:37.783 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:37.783 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:37.783 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:37.783 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.783 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.783 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.783 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 647778 00:21:37.783 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 647778 ']' 00:21:37.783 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 647778 00:21:37.783 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:37.783 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.783 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 647778 00:21:37.783 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:37.783 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:37.783 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 647778' 00:21:37.783 killing process with pid 647778 00:21:37.783 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 647778 00:21:37.783 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 647778 00:21:38.042 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:38.042 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:38.042 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:38.042 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.042 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=669282 00:21:38.042 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 669282 00:21:38.042 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:38.042 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 669282 ']' 00:21:38.042 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.042 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:38.042 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.042 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:38.042 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.301 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:38.301 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:38.301 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:38.301 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:38.301 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.301 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:38.301 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:38.301 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 669282 00:21:38.301 
13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 669282 ']' 00:21:38.301 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.301 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:38.301 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.301 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:38.301 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.301 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:38.301 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:38.301 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:38.301 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.301 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.560 null0 00:21:38.560 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.560 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:38.560 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.uTe 00:21:38.560 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.560 
13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.LuM ]] 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LuM 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.7Ak 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.xTD ]] 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xTD 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.560 13:53:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.FHs 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.jD1 ]] 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jD1 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.XEq 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.560 13:53:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:38.560 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:39.496 nvme0n1 00:21:39.496 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.496 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.496 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.496 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.496 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.496 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.496 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.496 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.496 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.496 { 00:21:39.496 "cntlid": 1, 00:21:39.496 "qid": 0, 00:21:39.496 "state": "enabled", 00:21:39.496 "thread": "nvmf_tgt_poll_group_000", 00:21:39.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:39.496 "listen_address": { 00:21:39.496 "trtype": "TCP", 00:21:39.496 "adrfam": "IPv4", 00:21:39.496 "traddr": "10.0.0.2", 00:21:39.496 "trsvcid": "4420" 00:21:39.496 }, 00:21:39.496 "peer_address": { 00:21:39.496 "trtype": "TCP", 00:21:39.496 "adrfam": "IPv4", 00:21:39.496 "traddr": "10.0.0.1", 00:21:39.496 "trsvcid": "38372" 00:21:39.496 }, 00:21:39.496 "auth": { 
00:21:39.496 "state": "completed", 00:21:39.496 "digest": "sha512", 00:21:39.497 "dhgroup": "ffdhe8192" 00:21:39.497 } 00:21:39.497 } 00:21:39.497 ]' 00:21:39.497 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.497 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.497 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.755 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:39.755 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.755 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.755 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.755 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.755 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:21:39.755 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:21:40.322 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:21:40.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.581 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:40.581 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.581 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.581 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.581 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:40.581 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.581 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.581 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.581 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:40.581 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:40.581 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:40.581 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:40.581 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 
--dhchap-key key3 00:21:40.581 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:40.581 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.581 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:40.581 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.581 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:40.581 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:40.581 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:40.841 request: 00:21:40.841 { 00:21:40.841 "name": "nvme0", 00:21:40.841 "trtype": "tcp", 00:21:40.841 "traddr": "10.0.0.2", 00:21:40.841 "adrfam": "ipv4", 00:21:40.841 "trsvcid": "4420", 00:21:40.841 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:40.841 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:40.841 "prchk_reftag": false, 00:21:40.841 "prchk_guard": false, 00:21:40.841 "hdgst": false, 00:21:40.841 "ddgst": false, 00:21:40.841 "dhchap_key": "key3", 00:21:40.841 "allow_unrecognized_csi": false, 00:21:40.841 "method": "bdev_nvme_attach_controller", 00:21:40.841 "req_id": 1 00:21:40.841 } 
00:21:40.841 Got JSON-RPC error response 00:21:40.841 response: 00:21:40.841 { 00:21:40.841 "code": -5, 00:21:40.841 "message": "Input/output error" 00:21:40.841 } 00:21:40.841 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:40.841 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:40.841 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:40.841 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:40.841 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:40.841 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:40.841 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:40.841 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:41.099 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:41.099 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:41.099 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:41.099 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:41.099 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:41.099 13:53:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:41.099 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:41.099 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:41.099 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:41.099 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:41.358 request: 00:21:41.358 { 00:21:41.358 "name": "nvme0", 00:21:41.358 "trtype": "tcp", 00:21:41.358 "traddr": "10.0.0.2", 00:21:41.358 "adrfam": "ipv4", 00:21:41.358 "trsvcid": "4420", 00:21:41.358 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:41.358 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:41.358 "prchk_reftag": false, 00:21:41.359 "prchk_guard": false, 00:21:41.359 "hdgst": false, 00:21:41.359 "ddgst": false, 00:21:41.359 "dhchap_key": "key3", 00:21:41.359 "allow_unrecognized_csi": false, 00:21:41.359 "method": "bdev_nvme_attach_controller", 00:21:41.359 "req_id": 1 00:21:41.359 } 00:21:41.359 Got JSON-RPC error response 00:21:41.359 response: 00:21:41.359 { 00:21:41.359 "code": -5, 00:21:41.359 "message": "Input/output error" 00:21:41.359 } 00:21:41.359 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:41.359 13:53:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:41.359 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:41.359 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:41.359 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:41.359 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:41.359 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:41.359 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:41.359 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:41.359 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:41.617 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:41.617 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.617 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.617 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.617 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:41.617 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.617 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.617 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.617 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:41.617 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:41.617 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:41.617 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:41.617 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:41.617 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:41.617 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:41.617 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:41.617 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:41.617 13:53:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:41.875 request: 00:21:41.875 { 00:21:41.875 "name": "nvme0", 00:21:41.875 "trtype": "tcp", 00:21:41.875 "traddr": "10.0.0.2", 00:21:41.875 "adrfam": "ipv4", 00:21:41.875 "trsvcid": "4420", 00:21:41.875 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:41.875 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:41.875 "prchk_reftag": false, 00:21:41.875 "prchk_guard": false, 00:21:41.875 "hdgst": false, 00:21:41.875 "ddgst": false, 00:21:41.875 "dhchap_key": "key0", 00:21:41.875 "dhchap_ctrlr_key": "key1", 00:21:41.875 "allow_unrecognized_csi": false, 00:21:41.875 "method": "bdev_nvme_attach_controller", 00:21:41.875 "req_id": 1 00:21:41.875 } 00:21:41.875 Got JSON-RPC error response 00:21:41.875 response: 00:21:41.875 { 00:21:41.875 "code": -5, 00:21:41.875 "message": "Input/output error" 00:21:41.875 } 00:21:41.875 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:41.875 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:41.875 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:41.875 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:41.875 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:21:41.875 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:41.875 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:42.132 nvme0n1 00:21:42.132 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:21:42.132 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:21:42.132 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.404 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.404 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.404 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.404 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:21:42.404 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.404 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.662 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:21:42.662 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:42.662 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:42.662 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:43.234 nvme0n1 00:21:43.234 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:21:43.234 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:21:43.234 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.492 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.492 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:43.492 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.492 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.492 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:43.492 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:21:43.492 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:21:43.492 13:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.751 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.751 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:21:43.751 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: --dhchap-ctrl-secret DHHC-1:03:MmQ0ODk2MzA3M2JhMTFjZmVmM2E4MGI0NTdiYzdhYzdiZmJjZGZkNThlYTllZjdjNWI2YmI3YTAzYmUxYTVmNE8QCr0=: 00:21:44.317 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:21:44.317 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:21:44.317 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:21:44.317 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:21:44.317 13:53:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:21:44.317 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:21:44.317 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:21:44.317 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.317 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.317 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:21:44.317 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:44.317 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:21:44.317 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:44.317 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.317 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:44.317 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.317 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:44.317 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 
00:21:44.317 13:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:44.883 request: 00:21:44.883 { 00:21:44.883 "name": "nvme0", 00:21:44.883 "trtype": "tcp", 00:21:44.883 "traddr": "10.0.0.2", 00:21:44.883 "adrfam": "ipv4", 00:21:44.883 "trsvcid": "4420", 00:21:44.883 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:44.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:44.883 "prchk_reftag": false, 00:21:44.883 "prchk_guard": false, 00:21:44.883 "hdgst": false, 00:21:44.883 "ddgst": false, 00:21:44.884 "dhchap_key": "key1", 00:21:44.884 "allow_unrecognized_csi": false, 00:21:44.884 "method": "bdev_nvme_attach_controller", 00:21:44.884 "req_id": 1 00:21:44.884 } 00:21:44.884 Got JSON-RPC error response 00:21:44.884 response: 00:21:44.884 { 00:21:44.884 "code": -5, 00:21:44.884 "message": "Input/output error" 00:21:44.884 } 00:21:44.884 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:44.884 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:44.884 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:44.884 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:44.884 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:44.884 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:44.884 13:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:45.450 nvme0n1 00:21:45.451 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:21:45.451 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:21:45.451 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.708 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.708 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.708 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.966 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:45.966 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.966 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.966 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.966 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:21:45.966 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:45.966 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:46.223 nvme0n1 00:21:46.224 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:21:46.224 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:21:46.224 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.481 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.481 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.481 13:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.739 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:46.739 
13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.740 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.740 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.740 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: '' 2s 00:21:46.740 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:46.740 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:46.740 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: 00:21:46.740 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:21:46.740 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:46.740 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:46.740 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: ]] 00:21:46.740 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NTFjMWMyZjIwNDkwMjhhYjZmNzg4ZWU1YjJjMzc2MTD68gOW: 00:21:46.740 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:21:46.740 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:46.740 13:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:48.637 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:21:48.637 13:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:48.637 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:48.637 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:48.637 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:48.637 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:48.637 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:48.637 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:21:48.637 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.637 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.638 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.638 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: 2s 00:21:48.638 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:48.638 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:48.638 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:21:48.638 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # 
ckey=DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: 00:21:48.638 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:48.638 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:48.638 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:21:48.638 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: ]] 00:21:48.638 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YjJmZTFkYmJhNTRkYmU4Y2U0MzRiZGZkMzVmMTlkZDlhZTQzNmNhZDZjYjdhNWE3Ml0X8g==: 00:21:48.638 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:48.638 13:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:51.166 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:21:51.166 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:51.166 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:51.166 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:51.166 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:51.166 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:51.166 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:51.166 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.167 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.167 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:51.167 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.167 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.167 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.167 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:51.167 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:51.167 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:51.423 nvme0n1 00:21:51.423 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:51.423 13:53:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.423 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.423 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.423 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:51.423 13:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:51.987 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:21:51.987 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:21:51.987 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.245 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.245 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:52.245 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.245 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.245 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.245 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 
00:21:52.245 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:21:52.504 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:21:52.504 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:21:52.504 13:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.504 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.504 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:52.504 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.504 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.504 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.504 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:52.504 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:52.504 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:52.504 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:52.504 13:53:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:52.504 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:52.504 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:52.504 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:52.504 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:53.090 request: 00:21:53.090 { 00:21:53.090 "name": "nvme0", 00:21:53.090 "dhchap_key": "key1", 00:21:53.090 "dhchap_ctrlr_key": "key3", 00:21:53.090 "method": "bdev_nvme_set_keys", 00:21:53.090 "req_id": 1 00:21:53.090 } 00:21:53.090 Got JSON-RPC error response 00:21:53.090 response: 00:21:53.090 { 00:21:53.090 "code": -13, 00:21:53.090 "message": "Permission denied" 00:21:53.090 } 00:21:53.090 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:53.090 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:53.090 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:53.090 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:53.090 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:53.090 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:53.090 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.348 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:21:53.348 13:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:21:54.282 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:54.282 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:54.282 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.539 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:21:54.539 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:54.539 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.539 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.539 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.539 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:54.539 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 
--ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:54.539 13:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:55.104 nvme0n1 00:21:55.105 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:55.105 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.105 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.105 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.105 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:55.105 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:55.105 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:55.105 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:55.105 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:55.105 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:55.105 
13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:55.105 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:55.105 13:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:55.672 request: 00:21:55.672 { 00:21:55.672 "name": "nvme0", 00:21:55.672 "dhchap_key": "key2", 00:21:55.672 "dhchap_ctrlr_key": "key0", 00:21:55.672 "method": "bdev_nvme_set_keys", 00:21:55.672 "req_id": 1 00:21:55.672 } 00:21:55.672 Got JSON-RPC error response 00:21:55.672 response: 00:21:55.672 { 00:21:55.672 "code": -13, 00:21:55.672 "message": "Permission denied" 00:21:55.672 } 00:21:55.672 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:55.672 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:55.672 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:55.672 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:55.672 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:55.672 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:55.672 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.931 13:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:55.931 13:53:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:21:56.863 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:56.863 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:56.863 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.122 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:21:57.122 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:21:57.122 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:21:57.122 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 647803 00:21:57.122 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 647803 ']' 00:21:57.122 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 647803 00:21:57.122 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:57.122 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:57.122 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 647803 00:21:57.122 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:57.122 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:57.122 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 647803' 00:21:57.122 killing process with 
pid 647803 00:21:57.122 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 647803 00:21:57.122 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 647803 00:21:57.382 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:57.382 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:57.382 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:21:57.382 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:57.382 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:21:57.382 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:57.382 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:57.382 rmmod nvme_tcp 00:21:57.382 rmmod nvme_fabrics 00:21:57.382 rmmod nvme_keyring 00:21:57.382 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:57.382 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:21:57.382 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:21:57.382 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 669282 ']' 00:21:57.382 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 669282 00:21:57.382 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 669282 ']' 00:21:57.382 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 669282 00:21:57.382 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:57.382 
13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:57.382 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 669282 00:21:57.641 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:57.641 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:57.641 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 669282' 00:21:57.641 killing process with pid 669282 00:21:57.641 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 669282 00:21:57.641 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 669282 00:21:57.641 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:57.641 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:57.641 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:57.641 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:21:57.641 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:21:57.641 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:57.641 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:21:57.641 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:57.641 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:57.641 13:53:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.641 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.641 13:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.175 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:00.175 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.uTe /tmp/spdk.key-sha256.7Ak /tmp/spdk.key-sha384.FHs /tmp/spdk.key-sha512.XEq /tmp/spdk.key-sha512.LuM /tmp/spdk.key-sha384.xTD /tmp/spdk.key-sha256.jD1 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:00.175 00:22:00.176 real 2m31.522s 00:22:00.176 user 5m49.143s 00:22:00.176 sys 0m24.133s 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.176 ************************************ 00:22:00.176 END TEST nvmf_auth_target 00:22:00.176 ************************************ 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set 
+x 00:22:00.176 ************************************ 00:22:00.176 START TEST nvmf_bdevio_no_huge 00:22:00.176 ************************************ 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:00.176 * Looking for test storage... 00:22:00.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@340 -- # ver1_l=2 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:00.176 13:53:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:00.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.176 --rc genhtml_branch_coverage=1 00:22:00.176 --rc genhtml_function_coverage=1 00:22:00.176 --rc genhtml_legend=1 00:22:00.176 --rc geninfo_all_blocks=1 00:22:00.176 --rc geninfo_unexecuted_blocks=1 00:22:00.176 00:22:00.176 ' 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:00.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.176 --rc genhtml_branch_coverage=1 00:22:00.176 --rc genhtml_function_coverage=1 00:22:00.176 --rc genhtml_legend=1 00:22:00.176 --rc geninfo_all_blocks=1 00:22:00.176 --rc geninfo_unexecuted_blocks=1 00:22:00.176 00:22:00.176 ' 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:00.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.176 --rc genhtml_branch_coverage=1 00:22:00.176 --rc genhtml_function_coverage=1 00:22:00.176 --rc genhtml_legend=1 00:22:00.176 --rc geninfo_all_blocks=1 00:22:00.176 --rc geninfo_unexecuted_blocks=1 00:22:00.176 00:22:00.176 ' 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:00.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.176 --rc genhtml_branch_coverage=1 00:22:00.176 --rc genhtml_function_coverage=1 00:22:00.176 --rc 
genhtml_legend=1 00:22:00.176 --rc geninfo_all_blocks=1 00:22:00.176 --rc geninfo_unexecuted_blocks=1 00:22:00.176 00:22:00.176 ' 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.176 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:00.177 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.177 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:00.177 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:00.177 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:00.177 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:00.177 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:00.177 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:00.177 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:00.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:00.177 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:00.177 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:00.177 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:00.177 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:22:00.177 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:00.177 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:00.177 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:00.177 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:00.177 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:00.177 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:00.177 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:00.177 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.177 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:00.177 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.177 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:00.177 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:00.177 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:00.177 13:53:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:22:06.843 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:06.843 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:06.843 Found net devices under 0000:86:00.0: cvl_0_0 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.843 
13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:06.843 Found net devices under 0000:86:00.1: cvl_0_1 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:22:06.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:06.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:22:06.843 00:22:06.843 --- 10.0.0.2 ping statistics --- 00:22:06.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.843 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:06.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:06.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:22:06.843 00:22:06.843 --- 10.0.0.1 ping statistics --- 00:22:06.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.843 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:06.843 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:06.844 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:06.844 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:06.844 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:06.844 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:06.844 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:22:06.844 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:06.844 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:06.844 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:06.844 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=676157 00:22:06.844 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 676157 00:22:06.844 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:06.844 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 676157 ']' 00:22:06.844 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.844 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:06.844 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.844 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:06.844 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:06.844 [2024-12-05 13:53:48.532135] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:22:06.844 [2024-12-05 13:53:48.532187] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:06.844 [2024-12-05 13:53:48.619807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:06.844 [2024-12-05 13:53:48.666211] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:06.844 [2024-12-05 13:53:48.666245] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:06.844 [2024-12-05 13:53:48.666251] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:06.844 [2024-12-05 13:53:48.666257] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:06.844 [2024-12-05 13:53:48.666262] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:06.844 [2024-12-05 13:53:48.667417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:06.844 [2024-12-05 13:53:48.667528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:06.844 [2024-12-05 13:53:48.667636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:06.844 [2024-12-05 13:53:48.667636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:06.844 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:06.844 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:06.844 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:06.844 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:06.844 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:06.844 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.844 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:06.844 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.844 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:06.844 [2024-12-05 13:53:49.416404] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:07.124 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.124 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:07.124 13:53:49 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.124 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:07.124 Malloc0 00:22:07.124 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.124 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:07.124 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.124 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:07.124 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.124 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:07.124 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.124 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:07.124 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.124 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:07.124 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.124 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:07.124 [2024-12-05 13:53:49.452680] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:07.124 13:53:49 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.124 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:07.124 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:07.124 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:07.124 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:07.124 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:07.124 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:07.124 { 00:22:07.124 "params": { 00:22:07.124 "name": "Nvme$subsystem", 00:22:07.124 "trtype": "$TEST_TRANSPORT", 00:22:07.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.124 "adrfam": "ipv4", 00:22:07.124 "trsvcid": "$NVMF_PORT", 00:22:07.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.124 "hdgst": ${hdgst:-false}, 00:22:07.124 "ddgst": ${ddgst:-false} 00:22:07.124 }, 00:22:07.124 "method": "bdev_nvme_attach_controller" 00:22:07.124 } 00:22:07.124 EOF 00:22:07.124 )") 00:22:07.124 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:07.124 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
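The `gen_nvmf_target_json` trace above builds one heredoc stanza per subsystem into a `config` array, then joins the stanzas and validates with `jq .`; the resolved JSON appears verbatim just below. A minimal self-contained sketch of that same pattern (values copied from the JSON this run produced; variable names here are illustrative, not SPDK's):

```shell
#!/usr/bin/env bash
# Sketch of the heredoc-per-subsystem config pattern seen in gen_nvmf_target_json.
# The parameter values are the ones this log resolved to (10.0.0.2:4420, cnode1/host1).
config=()
for subsystem in 1; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Join the stanzas on commas; the real helper additionally pipes through `jq .`.
IFS=, json="${config[*]}"
printf '%s\n' "$json"
```

In the test, this JSON is fed to bdevio via `--json /dev/fd/62`, so the attach parameters never touch disk.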
00:22:07.124 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:07.124 13:53:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:07.124 "params": { 00:22:07.124 "name": "Nvme1", 00:22:07.124 "trtype": "tcp", 00:22:07.124 "traddr": "10.0.0.2", 00:22:07.124 "adrfam": "ipv4", 00:22:07.124 "trsvcid": "4420", 00:22:07.124 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:07.124 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:07.124 "hdgst": false, 00:22:07.124 "ddgst": false 00:22:07.124 }, 00:22:07.124 "method": "bdev_nvme_attach_controller" 00:22:07.124 }' 00:22:07.124 [2024-12-05 13:53:49.503683] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:22:07.124 [2024-12-05 13:53:49.503729] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid676230 ] 00:22:07.124 [2024-12-05 13:53:49.582991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:07.124 [2024-12-05 13:53:49.631019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.124 [2024-12-05 13:53:49.631128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.124 [2024-12-05 13:53:49.631128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:07.381 I/O targets: 00:22:07.381 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:07.381 00:22:07.381 00:22:07.381 CUnit - A unit testing framework for C - Version 2.1-3 00:22:07.381 http://cunit.sourceforge.net/ 00:22:07.381 00:22:07.381 00:22:07.381 Suite: bdevio tests on: Nvme1n1 00:22:07.381 Test: blockdev write read block ...passed 00:22:07.381 Test: blockdev write zeroes read block ...passed 00:22:07.381 Test: blockdev write zeroes read no split ...passed 00:22:07.381 Test: blockdev write zeroes 
read split ...passed 00:22:07.638 Test: blockdev write zeroes read split partial ...passed 00:22:07.638 Test: blockdev reset ...[2024-12-05 13:53:50.005094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:07.638 [2024-12-05 13:53:50.005157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18828e0 (9): Bad file descriptor 00:22:07.638 [2024-12-05 13:53:50.058876] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:22:07.638 passed 00:22:07.638 Test: blockdev write read 8 blocks ...passed 00:22:07.638 Test: blockdev write read size > 128k ...passed 00:22:07.638 Test: blockdev write read invalid size ...passed 00:22:07.638 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:07.638 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:07.638 Test: blockdev write read max offset ...passed 00:22:07.638 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:07.894 Test: blockdev writev readv 8 blocks ...passed 00:22:07.895 Test: blockdev writev readv 30 x 1block ...passed 00:22:07.895 Test: blockdev writev readv block ...passed 00:22:07.895 Test: blockdev writev readv size > 128k ...passed 00:22:07.895 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:07.895 Test: blockdev comparev and writev ...[2024-12-05 13:53:50.273119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:07.895 [2024-12-05 13:53:50.273147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:07.895 [2024-12-05 13:53:50.273160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:07.895 [2024-12-05 
13:53:50.273168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:07.895 [2024-12-05 13:53:50.273398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:07.895 [2024-12-05 13:53:50.273415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:07.895 [2024-12-05 13:53:50.273426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:07.895 [2024-12-05 13:53:50.273433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:07.895 [2024-12-05 13:53:50.273668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:07.895 [2024-12-05 13:53:50.273678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:07.895 [2024-12-05 13:53:50.273689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:07.895 [2024-12-05 13:53:50.273696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:07.895 [2024-12-05 13:53:50.273927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:07.895 [2024-12-05 13:53:50.273938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:07.895 [2024-12-05 13:53:50.273950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:22:07.895 [2024-12-05 13:53:50.273957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:07.895 passed 00:22:07.895 Test: blockdev nvme passthru rw ...passed 00:22:07.895 Test: blockdev nvme passthru vendor specific ...[2024-12-05 13:53:50.356724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:07.895 [2024-12-05 13:53:50.356740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:07.895 [2024-12-05 13:53:50.356847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:07.895 [2024-12-05 13:53:50.356857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:07.895 [2024-12-05 13:53:50.356958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:07.895 [2024-12-05 13:53:50.356967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:07.895 [2024-12-05 13:53:50.357064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:07.895 [2024-12-05 13:53:50.357073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:07.895 passed 00:22:07.895 Test: blockdev nvme admin passthru ...passed 00:22:07.895 Test: blockdev copy ...passed 00:22:07.895 00:22:07.895 Run Summary: Type Total Ran Passed Failed Inactive 00:22:07.895 suites 1 1 n/a 0 0 00:22:07.895 tests 23 23 23 0 0 00:22:07.895 asserts 152 152 152 0 n/a 00:22:07.895 00:22:07.895 Elapsed time = 1.166 seconds 
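The bdevio pass summarized above was staged by a short RPC sequence against the target, all visible earlier in this section (bdevio.sh lines 18-22 of the trace). A dry-run sketch of that sequence, with `$RPC` standing in for SPDK's `rpc.py` wrapper (the wrapper name and default-to-`echo` behavior are assumptions for illustration; the arguments are exactly the ones this log shows):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target-side RPC sequence from this test.
# RPC defaults to `echo rpc.py` so the sketch runs anywhere without an
# SPDK target; point RPC at scripts/rpc.py to issue the real calls.
RPC="${RPC:-echo rpc.py}"

cmds=(
    "nvmf_create_transport -t tcp -o -u 8192"
    "bdev_malloc_create 64 512 -b Malloc0"
    "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001"
    "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0"
    "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
)
for c in "${cmds[@]}"; do
    $RPC $c
done
```

Order matters here: the transport and the Malloc0 bdev must exist before the subsystem can reference them, and the listener comes last, which is why the log prints "Target Listening on 10.0.0.2 port 4420" only after the namespace is attached.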
00:22:08.152 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:08.153 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.153 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:08.153 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.153 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:08.153 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:08.153 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:08.153 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:08.153 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:08.153 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:08.153 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:08.153 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:08.153 rmmod nvme_tcp 00:22:08.153 rmmod nvme_fabrics 00:22:08.153 rmmod nvme_keyring 00:22:08.412 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:08.412 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:22:08.412 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:08.412 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 676157 ']' 00:22:08.412 13:53:50 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 676157 00:22:08.412 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 676157 ']' 00:22:08.412 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 676157 00:22:08.412 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:08.412 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:08.412 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 676157 00:22:08.412 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:08.412 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:08.412 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 676157' 00:22:08.412 killing process with pid 676157 00:22:08.412 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 676157 00:22:08.412 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 676157 00:22:08.670 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:08.670 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:08.670 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:08.670 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:08.670 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:08.670 13:53:51 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:08.670 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:08.670 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:08.670 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:08.670 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.670 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:08.670 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.201 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:11.201 00:22:11.201 real 0m10.888s 00:22:11.201 user 0m13.466s 00:22:11.201 sys 0m5.457s 00:22:11.201 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:11.201 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:11.201 ************************************ 00:22:11.201 END TEST nvmf_bdevio_no_huge 00:22:11.201 ************************************ 00:22:11.201 13:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:11.201 13:53:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:11.201 13:53:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:11.201 13:53:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:11.201 
************************************ 00:22:11.201 START TEST nvmf_tls 00:22:11.201 ************************************ 00:22:11.201 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:11.201 * Looking for test storage... 00:22:11.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:11.201 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:11.201 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:22:11.201 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:11.201 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:11.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.202 --rc genhtml_branch_coverage=1 00:22:11.202 --rc genhtml_function_coverage=1 00:22:11.202 --rc genhtml_legend=1 00:22:11.202 --rc geninfo_all_blocks=1 00:22:11.202 --rc geninfo_unexecuted_blocks=1 00:22:11.202 00:22:11.202 ' 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:11.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.202 --rc genhtml_branch_coverage=1 00:22:11.202 --rc genhtml_function_coverage=1 00:22:11.202 --rc genhtml_legend=1 00:22:11.202 --rc geninfo_all_blocks=1 00:22:11.202 --rc geninfo_unexecuted_blocks=1 00:22:11.202 00:22:11.202 ' 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:11.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.202 --rc genhtml_branch_coverage=1 00:22:11.202 --rc genhtml_function_coverage=1 00:22:11.202 --rc genhtml_legend=1 00:22:11.202 --rc geninfo_all_blocks=1 00:22:11.202 --rc geninfo_unexecuted_blocks=1 00:22:11.202 00:22:11.202 ' 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:11.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.202 --rc genhtml_branch_coverage=1 00:22:11.202 --rc genhtml_function_coverage=1 00:22:11.202 --rc genhtml_legend=1 00:22:11.202 --rc geninfo_all_blocks=1 00:22:11.202 --rc geninfo_unexecuted_blocks=1 00:22:11.202 00:22:11.202 ' 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:11.202 
13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:11.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:22:11.202 13:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:17.786 13:53:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:17.786 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:17.786 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:17.786 13:53:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:17.786 Found net devices under 0000:86:00.0: cvl_0_0 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:17.786 Found net devices under 0000:86:00.1: cvl_0_1 00:22:17.786 13:53:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:17.786 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:17.787 
13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:17.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:17.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:22:17.787 00:22:17.787 --- 10.0.0.2 ping statistics --- 00:22:17.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.787 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:17.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:17.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:22:17.787 00:22:17.787 --- 10.0.0.1 ping statistics --- 00:22:17.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.787 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=679974 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 679974 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 679974 ']' 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:17.787 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.787 [2024-12-05 13:53:59.497343] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:22:17.787 [2024-12-05 13:53:59.497396] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.787 [2024-12-05 13:53:59.579247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.787 [2024-12-05 13:53:59.621436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.787 [2024-12-05 13:53:59.621469] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:17.787 [2024-12-05 13:53:59.621476] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.787 [2024-12-05 13:53:59.621483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.787 [2024-12-05 13:53:59.621488] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:17.787 [2024-12-05 13:53:59.622028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.787 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:17.787 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:17.787 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:17.787 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:17.787 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.046 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.046 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:18.046 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:18.046 true 00:22:18.046 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:18.046 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:18.303 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:18.303 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:18.303 
13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:18.560 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:18.560 13:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:18.817 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:18.817 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:18.817 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:18.817 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:18.817 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:19.074 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:19.074 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:19.074 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:19.074 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:19.332 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:19.332 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:19.332 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:22:19.332 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:19.332 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:19.591 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:19.591 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:19.591 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:19.850 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:19.850 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:20.109 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:20.109 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:20.109 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:20.109 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:20.109 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:20.109 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:20.109 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:20.109 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:20.109 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:20.109 13:54:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:20.109 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:20.109 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:20.109 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:20.109 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:20.109 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:20.109 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:20.109 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:20.109 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:20.109 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:20.109 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.0nYKlOGXPB 00:22:20.109 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:20.109 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.f1y5NBTPak 00:22:20.109 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:20.109 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:20.109 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.0nYKlOGXPB 00:22:20.109 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.f1y5NBTPak 00:22:20.109 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:20.368 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:20.626 13:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.0nYKlOGXPB 00:22:20.626 13:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.0nYKlOGXPB 00:22:20.626 13:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:20.626 [2024-12-05 13:54:03.183141] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:20.626 13:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:20.884 13:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:21.142 [2024-12-05 13:54:03.552095] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:21.142 [2024-12-05 13:54:03.552318] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:21.142 13:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:21.401 malloc0 00:22:21.401 13:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:21.401 13:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.0nYKlOGXPB 00:22:21.661 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:21.919 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.0nYKlOGXPB 00:22:31.887 Initializing NVMe Controllers 00:22:31.887 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:31.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:31.887 Initialization complete. Launching workers. 
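The two `format_interchange_psk` invocations earlier in the trace (tls.sh@119/@120) turn a raw hex PSK into the `NVMeTLSkey-1:01:...:` interchange form before it is written to the key files. A minimal sketch of that encoding, assuming the layout visible in the trace output (ASCII key bytes plus a little-endian CRC32, base64-encoded; the `01` field is the hash identifier corresponding to digest 1):

```python
import base64
import struct
import zlib

def format_interchange_psk(key: str, hash_id: int = 1,
                           prefix: str = "NVMeTLSkey-1") -> str:
    """Encode a PSK in the NVMe TLS interchange format.

    The payload is the key bytes followed by their CRC32 checksum in
    little-endian byte order, base64-encoded, with a trailing colon.
    """
    payload = key.encode()
    payload += struct.pack("<I", zlib.crc32(payload))
    return f"{prefix}:{hash_id:02d}:{base64.b64encode(payload).decode()}:"

tls_key = format_interchange_psk("00112233445566778899aabbccddeeff")
print(tls_key)
```

Decoding the base64 payload of the key in the trace recovers exactly the 32 ASCII key characters plus 4 checksum bytes, which is what the sketch produces by construction.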
00:22:31.887 ======================================================== 00:22:31.887 Latency(us) 00:22:31.887 Device Information : IOPS MiB/s Average min max 00:22:31.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16908.77 66.05 3785.11 845.81 5957.46 00:22:31.887 ======================================================== 00:22:31.887 Total : 16908.77 66.05 3785.11 845.81 5957.46 00:22:31.887 00:22:31.887 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0nYKlOGXPB 00:22:31.887 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:31.887 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:31.887 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:31.887 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.0nYKlOGXPB 00:22:31.887 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:31.887 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=683054 00:22:31.887 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:31.887 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:31.887 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 683054 /var/tmp/bdevperf.sock 00:22:31.887 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 683054 ']' 00:22:31.887 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:22:31.887 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:31.887 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:31.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:31.887 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:31.887 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.887 [2024-12-05 13:54:14.468541] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:22:31.887 [2024-12-05 13:54:14.468588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid683054 ] 00:22:32.145 [2024-12-05 13:54:14.542919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.145 [2024-12-05 13:54:14.584890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.145 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:32.145 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:32.145 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0nYKlOGXPB 00:22:32.404 13:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:22:32.661 [2024-12-05 13:54:15.035927] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:32.661 TLSTESTn1 00:22:32.661 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:32.661 Running I/O for 10 seconds... 00:22:34.969 5349.00 IOPS, 20.89 MiB/s [2024-12-05T12:54:18.490Z] 5489.50 IOPS, 21.44 MiB/s [2024-12-05T12:54:19.437Z] 5532.33 IOPS, 21.61 MiB/s [2024-12-05T12:54:20.374Z] 5572.75 IOPS, 21.77 MiB/s [2024-12-05T12:54:21.311Z] 5560.60 IOPS, 21.72 MiB/s [2024-12-05T12:54:22.247Z] 5565.67 IOPS, 21.74 MiB/s [2024-12-05T12:54:23.623Z] 5571.29 IOPS, 21.76 MiB/s [2024-12-05T12:54:24.557Z] 5583.88 IOPS, 21.81 MiB/s [2024-12-05T12:54:25.492Z] 5598.22 IOPS, 21.87 MiB/s [2024-12-05T12:54:25.492Z] 5534.70 IOPS, 21.62 MiB/s 00:22:42.905 Latency(us) 00:22:42.905 [2024-12-05T12:54:25.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.905 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:42.905 Verification LBA range: start 0x0 length 0x2000 00:22:42.905 TLSTESTn1 : 10.02 5538.34 21.63 0.00 0.00 23076.36 6397.56 29709.65 00:22:42.905 [2024-12-05T12:54:25.492Z] =================================================================================================================== 00:22:42.905 [2024-12-05T12:54:25.492Z] Total : 5538.34 21.63 0.00 0.00 23076.36 6397.56 29709.65 00:22:42.905 { 00:22:42.905 "results": [ 00:22:42.905 { 00:22:42.905 "job": "TLSTESTn1", 00:22:42.905 "core_mask": "0x4", 00:22:42.905 "workload": "verify", 00:22:42.905 "status": "finished", 00:22:42.905 "verify_range": { 00:22:42.905 "start": 0, 00:22:42.905 "length": 8192 00:22:42.905 }, 00:22:42.905 "queue_depth": 128, 00:22:42.905 "io_size": 4096, 00:22:42.905 "runtime": 10.015809, 00:22:42.905 "iops": 
5538.344431288576, 00:22:42.905 "mibps": 21.634157934721, 00:22:42.905 "io_failed": 0, 00:22:42.905 "io_timeout": 0, 00:22:42.905 "avg_latency_us": 23076.36225809969, 00:22:42.905 "min_latency_us": 6397.561904761905, 00:22:42.905 "max_latency_us": 29709.653333333332 00:22:42.905 } 00:22:42.905 ], 00:22:42.905 "core_count": 1 00:22:42.905 } 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 683054 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 683054 ']' 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 683054 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 683054 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 683054' 00:22:42.905 killing process with pid 683054 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 683054 00:22:42.905 Received shutdown signal, test time was about 10.000000 seconds 00:22:42.905 00:22:42.905 Latency(us) 00:22:42.905 [2024-12-05T12:54:25.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.905 [2024-12-05T12:54:25.492Z] 
=================================================================================================================== 00:22:42.905 [2024-12-05T12:54:25.492Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 683054 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.f1y5NBTPak 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.f1y5NBTPak 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.f1y5NBTPak 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.f1y5NBTPak 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=684890 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 684890 /var/tmp/bdevperf.sock 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 684890 ']' 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:42.905 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:42.906 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:42.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:42.906 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:42.906 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.163 [2024-12-05 13:54:25.521601] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
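The results JSON printed after the 10-second TLSTESTn1 run above is machine-readable, so throughput can be extracted programmatically rather than scraped from the latency table. A small sketch (field names and values taken from the trace; the JSON is trimmed to the fields used here):

```python
import json

# Result document as emitted by bdevperf.py above, trimmed for brevity.
raw = """
{
  "results": [
    {
      "job": "TLSTESTn1",
      "core_mask": "0x4",
      "workload": "verify",
      "status": "finished",
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 10.015809,
      "iops": 5538.344431288576,
      "mibps": 21.634157934721,
      "io_failed": 0,
      "avg_latency_us": 23076.36225809969
    }
  ],
  "core_count": 1
}
"""

summary = json.loads(raw)
for job in summary["results"]:
    # Recompute MiB/s from IOPS and IO size; it matches the reported "mibps".
    mib_s = job["iops"] * job["io_size"] / (1024 * 1024)
    print(f'{job["job"]}: {job["iops"]:.0f} IOPS, '
          f'{mib_s:.1f} MiB/s over {job["runtime"]:.1f}s')
```

Note that `iops * io_size` reproduces the `mibps` field, a quick consistency check when post-processing these runs.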
00:22:43.163 [2024-12-05 13:54:25.521649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid684890 ] 00:22:43.163 [2024-12-05 13:54:25.589863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.163 [2024-12-05 13:54:25.631355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.163 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:43.163 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:43.163 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.f1y5NBTPak 00:22:43.421 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:43.679 [2024-12-05 13:54:26.086452] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:43.679 [2024-12-05 13:54:26.091198] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:43.679 [2024-12-05 13:54:26.091800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb71a0 (107): Transport endpoint is not connected 00:22:43.679 [2024-12-05 13:54:26.092792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb71a0 (9): Bad file descriptor 00:22:43.679 
[2024-12-05 13:54:26.093793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:43.679 [2024-12-05 13:54:26.093805] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:43.679 [2024-12-05 13:54:26.093812] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:43.679 [2024-12-05 13:54:26.093820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:22:43.679 request: 00:22:43.679 { 00:22:43.679 "name": "TLSTEST", 00:22:43.679 "trtype": "tcp", 00:22:43.679 "traddr": "10.0.0.2", 00:22:43.679 "adrfam": "ipv4", 00:22:43.679 "trsvcid": "4420", 00:22:43.679 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.679 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:43.679 "prchk_reftag": false, 00:22:43.679 "prchk_guard": false, 00:22:43.679 "hdgst": false, 00:22:43.679 "ddgst": false, 00:22:43.680 "psk": "key0", 00:22:43.680 "allow_unrecognized_csi": false, 00:22:43.680 "method": "bdev_nvme_attach_controller", 00:22:43.680 "req_id": 1 00:22:43.680 } 00:22:43.680 Got JSON-RPC error response 00:22:43.680 response: 00:22:43.680 { 00:22:43.680 "code": -5, 00:22:43.680 "message": "Input/output error" 00:22:43.680 } 00:22:43.680 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 684890 00:22:43.680 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 684890 ']' 00:22:43.680 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 684890 00:22:43.680 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:43.680 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:43.680 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 684890 00:22:43.680 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:43.680 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:43.680 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 684890' 00:22:43.680 killing process with pid 684890 00:22:43.680 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 684890 00:22:43.680 Received shutdown signal, test time was about 10.000000 seconds 00:22:43.680 00:22:43.680 Latency(us) 00:22:43.680 [2024-12-05T12:54:26.267Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.680 [2024-12-05T12:54:26.267Z] =================================================================================================================== 00:22:43.680 [2024-12-05T12:54:26.267Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:43.680 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 684890 00:22:43.938 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:43.938 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:43.938 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:43.938 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:43.938 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:43.938 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.0nYKlOGXPB 00:22:43.938 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:22:43.938 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.0nYKlOGXPB 00:22:43.938 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:43.938 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:43.938 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:43.938 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:43.938 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.0nYKlOGXPB 00:22:43.938 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:43.938 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:43.938 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:43.938 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.0nYKlOGXPB 00:22:43.938 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:43.938 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=684907 00:22:43.938 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:43.938 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:43.938 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 684907 
/var/tmp/bdevperf.sock 00:22:43.938 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 684907 ']' 00:22:43.938 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:43.938 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:43.938 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:43.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:43.938 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:43.938 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.938 [2024-12-05 13:54:26.366961] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:22:43.938 [2024-12-05 13:54:26.367012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid684907 ] 00:22:43.938 [2024-12-05 13:54:26.433073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.938 [2024-12-05 13:54:26.471793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:44.197 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:44.197 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:44.197 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0nYKlOGXPB 00:22:44.197 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:22:44.456 [2024-12-05 13:54:26.931570] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:44.456 [2024-12-05 13:54:26.941890] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:44.456 [2024-12-05 13:54:26.941913] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:44.456 [2024-12-05 13:54:26.941938] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:22:44.456 [2024-12-05 13:54:26.941958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x184d1a0 (107): Transport endpoint is not connected 00:22:44.456 [2024-12-05 13:54:26.942943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x184d1a0 (9): Bad file descriptor 00:22:44.456 [2024-12-05 13:54:26.943945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:44.456 [2024-12-05 13:54:26.943958] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:44.456 [2024-12-05 13:54:26.943965] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:44.456 [2024-12-05 13:54:26.943973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:22:44.456 request: 00:22:44.456 { 00:22:44.456 "name": "TLSTEST", 00:22:44.456 "trtype": "tcp", 00:22:44.456 "traddr": "10.0.0.2", 00:22:44.456 "adrfam": "ipv4", 00:22:44.456 "trsvcid": "4420", 00:22:44.456 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.456 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:44.456 "prchk_reftag": false, 00:22:44.456 "prchk_guard": false, 00:22:44.456 "hdgst": false, 00:22:44.456 "ddgst": false, 00:22:44.456 "psk": "key0", 00:22:44.456 "allow_unrecognized_csi": false, 00:22:44.456 "method": "bdev_nvme_attach_controller", 00:22:44.456 "req_id": 1 00:22:44.456 } 00:22:44.456 Got JSON-RPC error response 00:22:44.456 response: 00:22:44.456 { 00:22:44.456 "code": -5, 00:22:44.456 "message": "Input/output error" 00:22:44.456 } 00:22:44.456 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 684907 00:22:44.456 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 684907 ']' 00:22:44.456 13:54:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 684907 00:22:44.456 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:44.456 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:44.456 13:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 684907 00:22:44.456 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:44.456 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:44.456 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 684907' 00:22:44.456 killing process with pid 684907 00:22:44.456 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 684907 00:22:44.456 Received shutdown signal, test time was about 10.000000 seconds 00:22:44.456 00:22:44.456 Latency(us) 00:22:44.456 [2024-12-05T12:54:27.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.456 [2024-12-05T12:54:27.043Z] =================================================================================================================== 00:22:44.456 [2024-12-05T12:54:27.043Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:44.456 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 684907 00:22:44.715 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:44.715 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:44.715 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:44.715 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:44.715 13:54:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:44.715 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.0nYKlOGXPB 00:22:44.715 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:44.715 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.0nYKlOGXPB 00:22:44.715 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:44.715 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:44.715 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:44.715 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:44.715 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.0nYKlOGXPB 00:22:44.715 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:44.715 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:44.715 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:44.715 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.0nYKlOGXPB 00:22:44.715 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:44.715 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=685143 00:22:44.715 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:44.715 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:44.715 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 685143 /var/tmp/bdevperf.sock 00:22:44.715 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 685143 ']' 00:22:44.715 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:44.715 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:44.715 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:44.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:44.715 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:44.715 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.715 [2024-12-05 13:54:27.226629] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:22:44.715 [2024-12-05 13:54:27.226680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid685143 ] 00:22:44.715 [2024-12-05 13:54:27.290010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.975 [2024-12-05 13:54:27.327047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:44.975 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:44.975 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:44.975 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0nYKlOGXPB 00:22:45.234 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:45.234 [2024-12-05 13:54:27.785960] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:45.234 [2024-12-05 13:54:27.794019] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:45.234 [2024-12-05 13:54:27.794046] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:45.234 [2024-12-05 13:54:27.794072] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:22:45.234 [2024-12-05 13:54:27.794308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18721a0 (107): Transport endpoint is not connected 00:22:45.234 [2024-12-05 13:54:27.795302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18721a0 (9): Bad file descriptor 00:22:45.234 [2024-12-05 13:54:27.796304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:22:45.234 [2024-12-05 13:54:27.796316] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:45.234 [2024-12-05 13:54:27.796323] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:22:45.234 [2024-12-05 13:54:27.796330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:22:45.234 request: 00:22:45.234 { 00:22:45.234 "name": "TLSTEST", 00:22:45.234 "trtype": "tcp", 00:22:45.234 "traddr": "10.0.0.2", 00:22:45.234 "adrfam": "ipv4", 00:22:45.234 "trsvcid": "4420", 00:22:45.234 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:45.234 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:45.234 "prchk_reftag": false, 00:22:45.234 "prchk_guard": false, 00:22:45.234 "hdgst": false, 00:22:45.234 "ddgst": false, 00:22:45.234 "psk": "key0", 00:22:45.234 "allow_unrecognized_csi": false, 00:22:45.234 "method": "bdev_nvme_attach_controller", 00:22:45.234 "req_id": 1 00:22:45.234 } 00:22:45.234 Got JSON-RPC error response 00:22:45.234 response: 00:22:45.234 { 00:22:45.234 "code": -5, 00:22:45.234 "message": "Input/output error" 00:22:45.234 } 00:22:45.492 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 685143 00:22:45.493 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 685143 ']' 00:22:45.493 13:54:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 685143 00:22:45.493 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:45.493 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:45.493 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 685143 00:22:45.493 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:45.493 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:45.493 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 685143' 00:22:45.493 killing process with pid 685143 00:22:45.493 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 685143 00:22:45.493 Received shutdown signal, test time was about 10.000000 seconds 00:22:45.493 00:22:45.493 Latency(us) 00:22:45.493 [2024-12-05T12:54:28.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.493 [2024-12-05T12:54:28.080Z] =================================================================================================================== 00:22:45.493 [2024-12-05T12:54:28.080Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:45.493 13:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 685143 00:22:45.493 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:45.493 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:45.493 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:45.493 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:45.493 13:54:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:45.493 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:45.493 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:45.493 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:45.493 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:45.493 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:45.493 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:45.493 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:45.493 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:45.493 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:45.493 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:45.493 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:45.493 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:45.493 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:45.493 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=685284 00:22:45.493 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:45.493 13:54:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:45.493 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 685284 /var/tmp/bdevperf.sock 00:22:45.493 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 685284 ']' 00:22:45.493 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:45.493 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:45.493 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:45.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:45.493 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:45.493 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.752 [2024-12-05 13:54:28.077940] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:22:45.752 [2024-12-05 13:54:28.077990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid685284 ] 00:22:45.752 [2024-12-05 13:54:28.151791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.752 [2024-12-05 13:54:28.191552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.752 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:45.752 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:45.752 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:22:46.010 [2024-12-05 13:54:28.446734] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:22:46.010 [2024-12-05 13:54:28.446765] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:46.010 request: 00:22:46.010 { 00:22:46.010 "name": "key0", 00:22:46.010 "path": "", 00:22:46.010 "method": "keyring_file_add_key", 00:22:46.010 "req_id": 1 00:22:46.010 } 00:22:46.010 Got JSON-RPC error response 00:22:46.010 response: 00:22:46.010 { 00:22:46.010 "code": -1, 00:22:46.010 "message": "Operation not permitted" 00:22:46.010 } 00:22:46.010 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:46.268 [2024-12-05 13:54:28.635304] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:22:46.268 [2024-12-05 13:54:28.635332] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:46.268 request: 00:22:46.268 { 00:22:46.268 "name": "TLSTEST", 00:22:46.269 "trtype": "tcp", 00:22:46.269 "traddr": "10.0.0.2", 00:22:46.269 "adrfam": "ipv4", 00:22:46.269 "trsvcid": "4420", 00:22:46.269 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:46.269 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:46.269 "prchk_reftag": false, 00:22:46.269 "prchk_guard": false, 00:22:46.269 "hdgst": false, 00:22:46.269 "ddgst": false, 00:22:46.269 "psk": "key0", 00:22:46.269 "allow_unrecognized_csi": false, 00:22:46.269 "method": "bdev_nvme_attach_controller", 00:22:46.269 "req_id": 1 00:22:46.269 } 00:22:46.269 Got JSON-RPC error response 00:22:46.269 response: 00:22:46.269 { 00:22:46.269 "code": -126, 00:22:46.269 "message": "Required key not available" 00:22:46.269 } 00:22:46.269 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 685284 00:22:46.269 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 685284 ']' 00:22:46.269 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 685284 00:22:46.269 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:46.269 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:46.269 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 685284 00:22:46.269 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:46.269 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:46.269 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 685284' 00:22:46.269 killing process with pid 685284 00:22:46.269 
13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 685284 00:22:46.269 Received shutdown signal, test time was about 10.000000 seconds 00:22:46.269 00:22:46.269 Latency(us) 00:22:46.269 [2024-12-05T12:54:28.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.269 [2024-12-05T12:54:28.856Z] =================================================================================================================== 00:22:46.269 [2024-12-05T12:54:28.856Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:46.269 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 685284 00:22:46.528 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:46.528 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:46.528 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:46.528 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:46.528 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:46.528 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 679974 00:22:46.528 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 679974 ']' 00:22:46.528 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 679974 00:22:46.528 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:46.528 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:46.528 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 679974 00:22:46.528 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:22:46.528 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:46.528 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 679974' 00:22:46.528 killing process with pid 679974 00:22:46.528 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 679974 00:22:46.528 13:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 679974 00:22:46.528 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:46.528 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:46.528 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:46.528 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:46.528 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:46.528 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:22:46.528 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:46.787 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:46.788 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:22:46.788 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.ajB8RVuSSK 00:22:46.788 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:46.788 13:54:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.ajB8RVuSSK 00:22:46.788 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:22:46.788 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:46.788 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:46.788 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.788 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=685402 00:22:46.788 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:46.788 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 685402 00:22:46.788 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 685402 ']' 00:22:46.788 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.788 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:46.788 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.788 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:46.788 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.788 [2024-12-05 13:54:29.192694] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
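The `format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2` step above wraps the configured secret into the NVMe TLS PSK interchange string (`NVMeTLSkey-1:02:...`) that is then written to `/tmp/tmp.ajB8RVuSSK` and registered via `keyring_file_add_key`. A minimal sketch of that transformation, assuming SPDK's `format_key` scheme of appending a little-endian CRC32 of the secret before base64-encoding (the function name below is illustrative, not SPDK's; `hmac_id` is the digest selector that becomes the `02` field):

```python
import base64
import zlib


def format_interchange_psk(secret: str, hmac_id: int) -> str:
    """Build an NVMe TLS PSK interchange string from a configured secret.

    Assumed layout (mirroring nvmf/common.sh format_key):
      prefix ':' two-hex-digit hmac id ':' base64(secret || crc32_le(secret)) ':'
    """
    raw = secret.encode("ascii")
    # CRC32 of the secret, appended as 4 little-endian bytes for integrity.
    crc = zlib.crc32(raw).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(raw + crc).decode("ascii")
    return "NVMeTLSkey-1:{:02x}:{}:".format(hmac_id, b64)
```

Decoding the base64 body of the key seen in the log (`MDAxMTIy...`) recovers the ASCII secret itself plus the 4-byte checksum, which is why the `--psk key0` attach succeeds once this file-backed key is loaded.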
00:22:46.788 [2024-12-05 13:54:29.192741] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:46.788 [2024-12-05 13:54:29.269429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.788 [2024-12-05 13:54:29.309530] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:46.788 [2024-12-05 13:54:29.309566] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:46.788 [2024-12-05 13:54:29.309573] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:46.788 [2024-12-05 13:54:29.309579] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:46.788 [2024-12-05 13:54:29.309587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:46.788 [2024-12-05 13:54:29.310123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.047 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:47.047 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:47.047 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:47.047 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:47.047 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.047 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:47.047 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.ajB8RVuSSK 00:22:47.047 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ajB8RVuSSK 00:22:47.047 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:47.047 [2024-12-05 13:54:29.610003] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:47.047 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:47.306 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:47.565 [2024-12-05 13:54:29.970933] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:47.565 [2024-12-05 13:54:29.971152] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:22:47.565 13:54:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:47.823 malloc0 00:22:47.823 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:47.823 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ajB8RVuSSK 00:22:48.082 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:48.341 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ajB8RVuSSK 00:22:48.341 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:48.341 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:48.341 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:48.341 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ajB8RVuSSK 00:22:48.341 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:48.341 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=685719 00:22:48.341 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:48.341 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:48.341 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 685719 /var/tmp/bdevperf.sock 00:22:48.341 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 685719 ']' 00:22:48.341 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:48.341 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:48.341 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:48.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:48.341 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:48.341 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.341 [2024-12-05 13:54:30.768285] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:22:48.341 [2024-12-05 13:54:30.768333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid685719 ] 00:22:48.341 [2024-12-05 13:54:30.843259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.341 [2024-12-05 13:54:30.884695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.600 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:48.600 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:48.600 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ajB8RVuSSK 00:22:48.600 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:48.858 [2024-12-05 13:54:31.319992] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:48.858 TLSTESTn1 00:22:48.858 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:49.115 Running I/O for 10 seconds... 
00:22:51.014 5321.00 IOPS, 20.79 MiB/s [2024-12-05T12:54:34.534Z] 5473.00 IOPS, 21.38 MiB/s [2024-12-05T12:54:35.909Z] 5513.33 IOPS, 21.54 MiB/s [2024-12-05T12:54:36.842Z] 5539.50 IOPS, 21.64 MiB/s [2024-12-05T12:54:37.776Z] 5527.60 IOPS, 21.59 MiB/s [2024-12-05T12:54:38.711Z] 5540.83 IOPS, 21.64 MiB/s [2024-12-05T12:54:39.646Z] 5556.29 IOPS, 21.70 MiB/s [2024-12-05T12:54:40.580Z] 5550.88 IOPS, 21.68 MiB/s [2024-12-05T12:54:41.535Z] 5554.56 IOPS, 21.70 MiB/s [2024-12-05T12:54:41.535Z] 5554.00 IOPS, 21.70 MiB/s 00:22:58.948 Latency(us) 00:22:58.948 [2024-12-05T12:54:41.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.948 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:58.948 Verification LBA range: start 0x0 length 0x2000 00:22:58.948 TLSTESTn1 : 10.02 5557.23 21.71 0.00 0.00 22998.24 6147.90 34453.21 00:22:58.948 [2024-12-05T12:54:41.535Z] =================================================================================================================== 00:22:58.948 [2024-12-05T12:54:41.535Z] Total : 5557.23 21.71 0.00 0.00 22998.24 6147.90 34453.21 00:22:58.948 { 00:22:58.948 "results": [ 00:22:58.948 { 00:22:58.948 "job": "TLSTESTn1", 00:22:58.948 "core_mask": "0x4", 00:22:58.948 "workload": "verify", 00:22:58.948 "status": "finished", 00:22:58.948 "verify_range": { 00:22:58.948 "start": 0, 00:22:58.948 "length": 8192 00:22:58.948 }, 00:22:58.948 "queue_depth": 128, 00:22:58.948 "io_size": 4096, 00:22:58.948 "runtime": 10.016854, 00:22:58.948 "iops": 5557.233838089284, 00:22:58.948 "mibps": 21.707944680036267, 00:22:58.948 "io_failed": 0, 00:22:58.948 "io_timeout": 0, 00:22:58.948 "avg_latency_us": 22998.24142810949, 00:22:58.948 "min_latency_us": 6147.900952380953, 00:22:58.948 "max_latency_us": 34453.21142857143 00:22:58.948 } 00:22:58.948 ], 00:22:58.948 "core_count": 1 00:22:58.948 } 00:22:59.206 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 685719 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 685719 ']' 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 685719 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 685719 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 685719' 00:22:59.207 killing process with pid 685719 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 685719 00:22:59.207 Received shutdown signal, test time was about 10.000000 seconds 00:22:59.207 00:22:59.207 Latency(us) 00:22:59.207 [2024-12-05T12:54:41.794Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.207 [2024-12-05T12:54:41.794Z] =================================================================================================================== 00:22:59.207 [2024-12-05T12:54:41.794Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 685719 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.ajB8RVuSSK 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # 
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ajB8RVuSSK 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ajB8RVuSSK 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ajB8RVuSSK 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ajB8RVuSSK 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=687500 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:59.207 13:54:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 687500 /var/tmp/bdevperf.sock 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 687500 ']' 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:59.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:59.207 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.465 [2024-12-05 13:54:41.817600] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:22:59.465 [2024-12-05 13:54:41.817649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid687500 ] 00:22:59.465 [2024-12-05 13:54:41.884102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.465 [2024-12-05 13:54:41.925416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.465 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:59.465 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:59.465 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ajB8RVuSSK 00:22:59.723 [2024-12-05 13:54:42.187900] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ajB8RVuSSK': 0100666 00:22:59.723 [2024-12-05 13:54:42.187926] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:59.723 request: 00:22:59.723 { 00:22:59.723 "name": "key0", 00:22:59.723 "path": "/tmp/tmp.ajB8RVuSSK", 00:22:59.723 "method": "keyring_file_add_key", 00:22:59.723 "req_id": 1 00:22:59.723 } 00:22:59.723 Got JSON-RPC error response 00:22:59.723 response: 00:22:59.723 { 00:22:59.723 "code": -1, 00:22:59.723 "message": "Operation not permitted" 00:22:59.723 } 00:22:59.723 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:59.982 [2024-12-05 13:54:42.376466] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:59.982 [2024-12-05 13:54:42.376499] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:59.982 request: 00:22:59.982 { 00:22:59.982 "name": "TLSTEST", 00:22:59.982 "trtype": "tcp", 00:22:59.982 "traddr": "10.0.0.2", 00:22:59.982 "adrfam": "ipv4", 00:22:59.982 "trsvcid": "4420", 00:22:59.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:59.982 "prchk_reftag": false, 00:22:59.982 "prchk_guard": false, 00:22:59.982 "hdgst": false, 00:22:59.982 "ddgst": false, 00:22:59.982 "psk": "key0", 00:22:59.982 "allow_unrecognized_csi": false, 00:22:59.982 "method": "bdev_nvme_attach_controller", 00:22:59.982 "req_id": 1 00:22:59.982 } 00:22:59.982 Got JSON-RPC error response 00:22:59.982 response: 00:22:59.982 { 00:22:59.982 "code": -126, 00:22:59.982 "message": "Required key not available" 00:22:59.982 } 00:22:59.982 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 687500 00:22:59.982 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 687500 ']' 00:22:59.982 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 687500 00:22:59.982 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:59.982 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:59.982 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 687500 00:22:59.982 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:59.982 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:59.982 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 687500' 00:22:59.982 killing process with pid 687500 00:22:59.982 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 687500 00:22:59.982 Received shutdown signal, test time was about 10.000000 seconds 00:22:59.982 00:22:59.982 Latency(us) 00:22:59.982 [2024-12-05T12:54:42.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.982 [2024-12-05T12:54:42.569Z] =================================================================================================================== 00:22:59.982 [2024-12-05T12:54:42.569Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:59.982 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 687500 00:23:00.240 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:00.240 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:00.240 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:00.240 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:00.240 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:00.240 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 685402 00:23:00.240 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 685402 ']' 00:23:00.240 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 685402 00:23:00.240 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:00.240 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:00.240 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 685402 00:23:00.240 13:54:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:00.241 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:00.241 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 685402' 00:23:00.241 killing process with pid 685402 00:23:00.241 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 685402 00:23:00.241 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 685402 00:23:00.241 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:00.241 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:00.241 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:00.241 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.241 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=687738 00:23:00.241 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:00.241 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 687738 00:23:00.241 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 687738 ']' 00:23:00.241 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.241 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:00.241 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:00.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.499 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:00.500 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.500 [2024-12-05 13:54:42.875854] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:23:00.500 [2024-12-05 13:54:42.875902] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.500 [2024-12-05 13:54:42.954134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.500 [2024-12-05 13:54:42.989480] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.500 [2024-12-05 13:54:42.989516] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.500 [2024-12-05 13:54:42.989522] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.500 [2024-12-05 13:54:42.989529] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.500 [2024-12-05 13:54:42.989533] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:00.500 [2024-12-05 13:54:42.990118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.759 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:00.759 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:00.759 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:00.759 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:00.759 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.759 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.759 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.ajB8RVuSSK 00:23:00.759 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:00.759 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ajB8RVuSSK 00:23:00.759 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:00.759 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.759 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:00.759 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.759 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.ajB8RVuSSK 00:23:00.759 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ajB8RVuSSK 00:23:00.759 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:00.759 [2024-12-05 13:54:43.297968] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.759 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:01.017 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:01.275 [2024-12-05 13:54:43.711027] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:01.275 [2024-12-05 13:54:43.711243] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.275 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:01.534 malloc0 00:23:01.534 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:01.793 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ajB8RVuSSK 00:23:01.793 [2024-12-05 13:54:44.292550] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ajB8RVuSSK': 0100666 00:23:01.793 [2024-12-05 13:54:44.292581] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:01.793 request: 00:23:01.793 { 00:23:01.793 "name": "key0", 00:23:01.793 "path": "/tmp/tmp.ajB8RVuSSK", 00:23:01.793 "method": "keyring_file_add_key", 00:23:01.793 "req_id": 1 
00:23:01.793 } 00:23:01.793 Got JSON-RPC error response 00:23:01.793 response: 00:23:01.793 { 00:23:01.793 "code": -1, 00:23:01.793 "message": "Operation not permitted" 00:23:01.793 } 00:23:01.793 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:02.052 [2024-12-05 13:54:44.481056] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:02.052 [2024-12-05 13:54:44.481085] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:02.052 request: 00:23:02.052 { 00:23:02.052 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:02.052 "host": "nqn.2016-06.io.spdk:host1", 00:23:02.052 "psk": "key0", 00:23:02.052 "method": "nvmf_subsystem_add_host", 00:23:02.052 "req_id": 1 00:23:02.052 } 00:23:02.052 Got JSON-RPC error response 00:23:02.052 response: 00:23:02.052 { 00:23:02.052 "code": -32603, 00:23:02.052 "message": "Internal error" 00:23:02.052 } 00:23:02.052 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:02.052 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:02.052 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:02.052 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:02.052 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 687738 00:23:02.052 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 687738 ']' 00:23:02.052 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 687738 00:23:02.052 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:02.052 13:54:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:02.052 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 687738 00:23:02.052 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:02.052 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:02.052 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 687738' 00:23:02.052 killing process with pid 687738 00:23:02.052 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 687738 00:23:02.052 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 687738 00:23:02.311 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.ajB8RVuSSK 00:23:02.311 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:02.311 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:02.311 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:02.311 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.311 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=688006 00:23:02.311 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:02.311 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 688006 00:23:02.311 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 688006 ']' 00:23:02.311 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.311 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.311 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.311 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.311 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.311 [2024-12-05 13:54:44.789573] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:23:02.311 [2024-12-05 13:54:44.789619] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.311 [2024-12-05 13:54:44.868140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.573 [2024-12-05 13:54:44.907119] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:02.573 [2024-12-05 13:54:44.907152] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:02.573 [2024-12-05 13:54:44.907160] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:02.573 [2024-12-05 13:54:44.907166] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:02.574 [2024-12-05 13:54:44.907170] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:02.574 [2024-12-05 13:54:44.907762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.574 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:02.574 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:02.574 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:02.574 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:02.574 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.574 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:02.574 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.ajB8RVuSSK 00:23:02.574 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ajB8RVuSSK 00:23:02.574 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:02.834 [2024-12-05 13:54:45.220246] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.834 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:03.092 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:03.092 [2024-12-05 13:54:45.605229] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:03.092 [2024-12-05 13:54:45.605478] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:03.092 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:03.350 malloc0 00:23:03.350 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:03.609 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ajB8RVuSSK 00:23:03.609 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:03.867 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:03.867 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=688340 00:23:03.867 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:03.867 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 688340 /var/tmp/bdevperf.sock 00:23:03.867 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 688340 ']' 00:23:03.867 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:03.867 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:03.867 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:23:03.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:03.867 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:03.867 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.867 [2024-12-05 13:54:46.415504] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:23:03.867 [2024-12-05 13:54:46.415554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid688340 ] 00:23:04.125 [2024-12-05 13:54:46.491966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.125 [2024-12-05 13:54:46.531754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:04.125 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:04.125 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:04.125 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ajB8RVuSSK 00:23:04.384 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:04.643 [2024-12-05 13:54:46.991154] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:04.643 TLSTESTn1 00:23:04.643 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:04.902 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:04.902 "subsystems": [ 00:23:04.902 { 00:23:04.902 "subsystem": "keyring", 00:23:04.902 "config": [ 00:23:04.902 { 00:23:04.902 "method": "keyring_file_add_key", 00:23:04.902 "params": { 00:23:04.902 "name": "key0", 00:23:04.902 "path": "/tmp/tmp.ajB8RVuSSK" 00:23:04.902 } 00:23:04.902 } 00:23:04.902 ] 00:23:04.902 }, 00:23:04.902 { 00:23:04.902 "subsystem": "iobuf", 00:23:04.902 "config": [ 00:23:04.902 { 00:23:04.902 "method": "iobuf_set_options", 00:23:04.902 "params": { 00:23:04.902 "small_pool_count": 8192, 00:23:04.902 "large_pool_count": 1024, 00:23:04.902 "small_bufsize": 8192, 00:23:04.902 "large_bufsize": 135168, 00:23:04.902 "enable_numa": false 00:23:04.902 } 00:23:04.902 } 00:23:04.902 ] 00:23:04.902 }, 00:23:04.902 { 00:23:04.902 "subsystem": "sock", 00:23:04.902 "config": [ 00:23:04.902 { 00:23:04.902 "method": "sock_set_default_impl", 00:23:04.902 "params": { 00:23:04.902 "impl_name": "posix" 00:23:04.902 } 00:23:04.902 }, 00:23:04.902 { 00:23:04.902 "method": "sock_impl_set_options", 00:23:04.902 "params": { 00:23:04.902 "impl_name": "ssl", 00:23:04.902 "recv_buf_size": 4096, 00:23:04.902 "send_buf_size": 4096, 00:23:04.902 "enable_recv_pipe": true, 00:23:04.902 "enable_quickack": false, 00:23:04.902 "enable_placement_id": 0, 00:23:04.902 "enable_zerocopy_send_server": true, 00:23:04.902 "enable_zerocopy_send_client": false, 00:23:04.902 "zerocopy_threshold": 0, 00:23:04.902 "tls_version": 0, 00:23:04.902 "enable_ktls": false 00:23:04.902 } 00:23:04.902 }, 00:23:04.902 { 00:23:04.902 "method": "sock_impl_set_options", 00:23:04.902 "params": { 00:23:04.902 "impl_name": "posix", 00:23:04.902 "recv_buf_size": 2097152, 00:23:04.902 "send_buf_size": 2097152, 00:23:04.902 "enable_recv_pipe": true, 00:23:04.902 "enable_quickack": false, 00:23:04.902 "enable_placement_id": 0, 
00:23:04.902 "enable_zerocopy_send_server": true, 00:23:04.902 "enable_zerocopy_send_client": false, 00:23:04.902 "zerocopy_threshold": 0, 00:23:04.902 "tls_version": 0, 00:23:04.902 "enable_ktls": false 00:23:04.902 } 00:23:04.902 } 00:23:04.902 ] 00:23:04.902 }, 00:23:04.902 { 00:23:04.902 "subsystem": "vmd", 00:23:04.902 "config": [] 00:23:04.902 }, 00:23:04.902 { 00:23:04.902 "subsystem": "accel", 00:23:04.902 "config": [ 00:23:04.902 { 00:23:04.902 "method": "accel_set_options", 00:23:04.902 "params": { 00:23:04.902 "small_cache_size": 128, 00:23:04.902 "large_cache_size": 16, 00:23:04.902 "task_count": 2048, 00:23:04.902 "sequence_count": 2048, 00:23:04.902 "buf_count": 2048 00:23:04.902 } 00:23:04.902 } 00:23:04.902 ] 00:23:04.902 }, 00:23:04.902 { 00:23:04.902 "subsystem": "bdev", 00:23:04.902 "config": [ 00:23:04.902 { 00:23:04.902 "method": "bdev_set_options", 00:23:04.902 "params": { 00:23:04.902 "bdev_io_pool_size": 65535, 00:23:04.902 "bdev_io_cache_size": 256, 00:23:04.902 "bdev_auto_examine": true, 00:23:04.902 "iobuf_small_cache_size": 128, 00:23:04.902 "iobuf_large_cache_size": 16 00:23:04.902 } 00:23:04.902 }, 00:23:04.902 { 00:23:04.902 "method": "bdev_raid_set_options", 00:23:04.902 "params": { 00:23:04.902 "process_window_size_kb": 1024, 00:23:04.902 "process_max_bandwidth_mb_sec": 0 00:23:04.902 } 00:23:04.902 }, 00:23:04.902 { 00:23:04.902 "method": "bdev_iscsi_set_options", 00:23:04.902 "params": { 00:23:04.902 "timeout_sec": 30 00:23:04.902 } 00:23:04.902 }, 00:23:04.902 { 00:23:04.902 "method": "bdev_nvme_set_options", 00:23:04.902 "params": { 00:23:04.902 "action_on_timeout": "none", 00:23:04.902 "timeout_us": 0, 00:23:04.902 "timeout_admin_us": 0, 00:23:04.902 "keep_alive_timeout_ms": 10000, 00:23:04.902 "arbitration_burst": 0, 00:23:04.902 "low_priority_weight": 0, 00:23:04.902 "medium_priority_weight": 0, 00:23:04.902 "high_priority_weight": 0, 00:23:04.902 "nvme_adminq_poll_period_us": 10000, 00:23:04.902 "nvme_ioq_poll_period_us": 0, 
00:23:04.902 "io_queue_requests": 0, 00:23:04.902 "delay_cmd_submit": true, 00:23:04.902 "transport_retry_count": 4, 00:23:04.902 "bdev_retry_count": 3, 00:23:04.902 "transport_ack_timeout": 0, 00:23:04.902 "ctrlr_loss_timeout_sec": 0, 00:23:04.902 "reconnect_delay_sec": 0, 00:23:04.902 "fast_io_fail_timeout_sec": 0, 00:23:04.902 "disable_auto_failback": false, 00:23:04.902 "generate_uuids": false, 00:23:04.902 "transport_tos": 0, 00:23:04.902 "nvme_error_stat": false, 00:23:04.902 "rdma_srq_size": 0, 00:23:04.902 "io_path_stat": false, 00:23:04.902 "allow_accel_sequence": false, 00:23:04.902 "rdma_max_cq_size": 0, 00:23:04.902 "rdma_cm_event_timeout_ms": 0, 00:23:04.902 "dhchap_digests": [ 00:23:04.902 "sha256", 00:23:04.902 "sha384", 00:23:04.902 "sha512" 00:23:04.902 ], 00:23:04.902 "dhchap_dhgroups": [ 00:23:04.902 "null", 00:23:04.902 "ffdhe2048", 00:23:04.902 "ffdhe3072", 00:23:04.902 "ffdhe4096", 00:23:04.902 "ffdhe6144", 00:23:04.902 "ffdhe8192" 00:23:04.902 ] 00:23:04.902 } 00:23:04.902 }, 00:23:04.902 { 00:23:04.902 "method": "bdev_nvme_set_hotplug", 00:23:04.902 "params": { 00:23:04.902 "period_us": 100000, 00:23:04.902 "enable": false 00:23:04.902 } 00:23:04.902 }, 00:23:04.902 { 00:23:04.902 "method": "bdev_malloc_create", 00:23:04.902 "params": { 00:23:04.902 "name": "malloc0", 00:23:04.902 "num_blocks": 8192, 00:23:04.902 "block_size": 4096, 00:23:04.902 "physical_block_size": 4096, 00:23:04.902 "uuid": "31576b11-db94-4102-86d5-fddf99b72c79", 00:23:04.902 "optimal_io_boundary": 0, 00:23:04.902 "md_size": 0, 00:23:04.902 "dif_type": 0, 00:23:04.902 "dif_is_head_of_md": false, 00:23:04.902 "dif_pi_format": 0 00:23:04.902 } 00:23:04.902 }, 00:23:04.902 { 00:23:04.902 "method": "bdev_wait_for_examine" 00:23:04.902 } 00:23:04.902 ] 00:23:04.902 }, 00:23:04.902 { 00:23:04.902 "subsystem": "nbd", 00:23:04.902 "config": [] 00:23:04.902 }, 00:23:04.902 { 00:23:04.902 "subsystem": "scheduler", 00:23:04.902 "config": [ 00:23:04.902 { 00:23:04.902 "method": 
"framework_set_scheduler", 00:23:04.902 "params": { 00:23:04.902 "name": "static" 00:23:04.902 } 00:23:04.902 } 00:23:04.902 ] 00:23:04.902 }, 00:23:04.902 { 00:23:04.902 "subsystem": "nvmf", 00:23:04.902 "config": [ 00:23:04.902 { 00:23:04.902 "method": "nvmf_set_config", 00:23:04.902 "params": { 00:23:04.902 "discovery_filter": "match_any", 00:23:04.902 "admin_cmd_passthru": { 00:23:04.902 "identify_ctrlr": false 00:23:04.902 }, 00:23:04.902 "dhchap_digests": [ 00:23:04.902 "sha256", 00:23:04.902 "sha384", 00:23:04.902 "sha512" 00:23:04.902 ], 00:23:04.902 "dhchap_dhgroups": [ 00:23:04.902 "null", 00:23:04.902 "ffdhe2048", 00:23:04.902 "ffdhe3072", 00:23:04.902 "ffdhe4096", 00:23:04.903 "ffdhe6144", 00:23:04.903 "ffdhe8192" 00:23:04.903 ] 00:23:04.903 } 00:23:04.903 }, 00:23:04.903 { 00:23:04.903 "method": "nvmf_set_max_subsystems", 00:23:04.903 "params": { 00:23:04.903 "max_subsystems": 1024 00:23:04.903 } 00:23:04.903 }, 00:23:04.903 { 00:23:04.903 "method": "nvmf_set_crdt", 00:23:04.903 "params": { 00:23:04.903 "crdt1": 0, 00:23:04.903 "crdt2": 0, 00:23:04.903 "crdt3": 0 00:23:04.903 } 00:23:04.903 }, 00:23:04.903 { 00:23:04.903 "method": "nvmf_create_transport", 00:23:04.903 "params": { 00:23:04.903 "trtype": "TCP", 00:23:04.903 "max_queue_depth": 128, 00:23:04.903 "max_io_qpairs_per_ctrlr": 127, 00:23:04.903 "in_capsule_data_size": 4096, 00:23:04.903 "max_io_size": 131072, 00:23:04.903 "io_unit_size": 131072, 00:23:04.903 "max_aq_depth": 128, 00:23:04.903 "num_shared_buffers": 511, 00:23:04.903 "buf_cache_size": 4294967295, 00:23:04.903 "dif_insert_or_strip": false, 00:23:04.903 "zcopy": false, 00:23:04.903 "c2h_success": false, 00:23:04.903 "sock_priority": 0, 00:23:04.903 "abort_timeout_sec": 1, 00:23:04.903 "ack_timeout": 0, 00:23:04.903 "data_wr_pool_size": 0 00:23:04.903 } 00:23:04.903 }, 00:23:04.903 { 00:23:04.903 "method": "nvmf_create_subsystem", 00:23:04.903 "params": { 00:23:04.903 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.903 
"allow_any_host": false, 00:23:04.903 "serial_number": "SPDK00000000000001", 00:23:04.903 "model_number": "SPDK bdev Controller", 00:23:04.903 "max_namespaces": 10, 00:23:04.903 "min_cntlid": 1, 00:23:04.903 "max_cntlid": 65519, 00:23:04.903 "ana_reporting": false 00:23:04.903 } 00:23:04.903 }, 00:23:04.903 { 00:23:04.903 "method": "nvmf_subsystem_add_host", 00:23:04.903 "params": { 00:23:04.903 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.903 "host": "nqn.2016-06.io.spdk:host1", 00:23:04.903 "psk": "key0" 00:23:04.903 } 00:23:04.903 }, 00:23:04.903 { 00:23:04.903 "method": "nvmf_subsystem_add_ns", 00:23:04.903 "params": { 00:23:04.903 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.903 "namespace": { 00:23:04.903 "nsid": 1, 00:23:04.903 "bdev_name": "malloc0", 00:23:04.903 "nguid": "31576B11DB94410286D5FDDF99B72C79", 00:23:04.903 "uuid": "31576b11-db94-4102-86d5-fddf99b72c79", 00:23:04.903 "no_auto_visible": false 00:23:04.903 } 00:23:04.903 } 00:23:04.903 }, 00:23:04.903 { 00:23:04.903 "method": "nvmf_subsystem_add_listener", 00:23:04.903 "params": { 00:23:04.903 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.903 "listen_address": { 00:23:04.903 "trtype": "TCP", 00:23:04.903 "adrfam": "IPv4", 00:23:04.903 "traddr": "10.0.0.2", 00:23:04.903 "trsvcid": "4420" 00:23:04.903 }, 00:23:04.903 "secure_channel": true 00:23:04.903 } 00:23:04.903 } 00:23:04.903 ] 00:23:04.903 } 00:23:04.903 ] 00:23:04.903 }' 00:23:04.903 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:05.162 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:05.162 "subsystems": [ 00:23:05.162 { 00:23:05.162 "subsystem": "keyring", 00:23:05.162 "config": [ 00:23:05.162 { 00:23:05.162 "method": "keyring_file_add_key", 00:23:05.162 "params": { 00:23:05.162 "name": "key0", 00:23:05.162 "path": "/tmp/tmp.ajB8RVuSSK" 00:23:05.162 } 
00:23:05.162 } 00:23:05.162 ] 00:23:05.162 }, 00:23:05.162 { 00:23:05.162 "subsystem": "iobuf", 00:23:05.162 "config": [ 00:23:05.162 { 00:23:05.162 "method": "iobuf_set_options", 00:23:05.162 "params": { 00:23:05.162 "small_pool_count": 8192, 00:23:05.162 "large_pool_count": 1024, 00:23:05.162 "small_bufsize": 8192, 00:23:05.162 "large_bufsize": 135168, 00:23:05.162 "enable_numa": false 00:23:05.162 } 00:23:05.162 } 00:23:05.162 ] 00:23:05.162 }, 00:23:05.162 { 00:23:05.162 "subsystem": "sock", 00:23:05.162 "config": [ 00:23:05.162 { 00:23:05.162 "method": "sock_set_default_impl", 00:23:05.162 "params": { 00:23:05.162 "impl_name": "posix" 00:23:05.162 } 00:23:05.162 }, 00:23:05.162 { 00:23:05.162 "method": "sock_impl_set_options", 00:23:05.162 "params": { 00:23:05.162 "impl_name": "ssl", 00:23:05.162 "recv_buf_size": 4096, 00:23:05.162 "send_buf_size": 4096, 00:23:05.162 "enable_recv_pipe": true, 00:23:05.162 "enable_quickack": false, 00:23:05.162 "enable_placement_id": 0, 00:23:05.162 "enable_zerocopy_send_server": true, 00:23:05.162 "enable_zerocopy_send_client": false, 00:23:05.162 "zerocopy_threshold": 0, 00:23:05.162 "tls_version": 0, 00:23:05.162 "enable_ktls": false 00:23:05.162 } 00:23:05.162 }, 00:23:05.162 { 00:23:05.162 "method": "sock_impl_set_options", 00:23:05.162 "params": { 00:23:05.162 "impl_name": "posix", 00:23:05.162 "recv_buf_size": 2097152, 00:23:05.162 "send_buf_size": 2097152, 00:23:05.162 "enable_recv_pipe": true, 00:23:05.162 "enable_quickack": false, 00:23:05.162 "enable_placement_id": 0, 00:23:05.162 "enable_zerocopy_send_server": true, 00:23:05.162 "enable_zerocopy_send_client": false, 00:23:05.162 "zerocopy_threshold": 0, 00:23:05.162 "tls_version": 0, 00:23:05.162 "enable_ktls": false 00:23:05.162 } 00:23:05.162 } 00:23:05.162 ] 00:23:05.162 }, 00:23:05.162 { 00:23:05.162 "subsystem": "vmd", 00:23:05.162 "config": [] 00:23:05.162 }, 00:23:05.162 { 00:23:05.162 "subsystem": "accel", 00:23:05.162 "config": [ 00:23:05.162 { 00:23:05.162 
"method": "accel_set_options", 00:23:05.162 "params": { 00:23:05.162 "small_cache_size": 128, 00:23:05.162 "large_cache_size": 16, 00:23:05.162 "task_count": 2048, 00:23:05.162 "sequence_count": 2048, 00:23:05.162 "buf_count": 2048 00:23:05.162 } 00:23:05.162 } 00:23:05.162 ] 00:23:05.162 }, 00:23:05.162 { 00:23:05.162 "subsystem": "bdev", 00:23:05.162 "config": [ 00:23:05.162 { 00:23:05.162 "method": "bdev_set_options", 00:23:05.162 "params": { 00:23:05.162 "bdev_io_pool_size": 65535, 00:23:05.162 "bdev_io_cache_size": 256, 00:23:05.162 "bdev_auto_examine": true, 00:23:05.162 "iobuf_small_cache_size": 128, 00:23:05.162 "iobuf_large_cache_size": 16 00:23:05.162 } 00:23:05.162 }, 00:23:05.162 { 00:23:05.162 "method": "bdev_raid_set_options", 00:23:05.162 "params": { 00:23:05.162 "process_window_size_kb": 1024, 00:23:05.162 "process_max_bandwidth_mb_sec": 0 00:23:05.162 } 00:23:05.162 }, 00:23:05.162 { 00:23:05.162 "method": "bdev_iscsi_set_options", 00:23:05.162 "params": { 00:23:05.162 "timeout_sec": 30 00:23:05.162 } 00:23:05.162 }, 00:23:05.162 { 00:23:05.162 "method": "bdev_nvme_set_options", 00:23:05.162 "params": { 00:23:05.163 "action_on_timeout": "none", 00:23:05.163 "timeout_us": 0, 00:23:05.163 "timeout_admin_us": 0, 00:23:05.163 "keep_alive_timeout_ms": 10000, 00:23:05.163 "arbitration_burst": 0, 00:23:05.163 "low_priority_weight": 0, 00:23:05.163 "medium_priority_weight": 0, 00:23:05.163 "high_priority_weight": 0, 00:23:05.163 "nvme_adminq_poll_period_us": 10000, 00:23:05.163 "nvme_ioq_poll_period_us": 0, 00:23:05.163 "io_queue_requests": 512, 00:23:05.163 "delay_cmd_submit": true, 00:23:05.163 "transport_retry_count": 4, 00:23:05.163 "bdev_retry_count": 3, 00:23:05.163 "transport_ack_timeout": 0, 00:23:05.163 "ctrlr_loss_timeout_sec": 0, 00:23:05.163 "reconnect_delay_sec": 0, 00:23:05.163 "fast_io_fail_timeout_sec": 0, 00:23:05.163 "disable_auto_failback": false, 00:23:05.163 "generate_uuids": false, 00:23:05.163 "transport_tos": 0, 00:23:05.163 
"nvme_error_stat": false, 00:23:05.163 "rdma_srq_size": 0, 00:23:05.163 "io_path_stat": false, 00:23:05.163 "allow_accel_sequence": false, 00:23:05.163 "rdma_max_cq_size": 0, 00:23:05.163 "rdma_cm_event_timeout_ms": 0, 00:23:05.163 "dhchap_digests": [ 00:23:05.163 "sha256", 00:23:05.163 "sha384", 00:23:05.163 "sha512" 00:23:05.163 ], 00:23:05.163 "dhchap_dhgroups": [ 00:23:05.163 "null", 00:23:05.163 "ffdhe2048", 00:23:05.163 "ffdhe3072", 00:23:05.163 "ffdhe4096", 00:23:05.163 "ffdhe6144", 00:23:05.163 "ffdhe8192" 00:23:05.163 ] 00:23:05.163 } 00:23:05.163 }, 00:23:05.163 { 00:23:05.163 "method": "bdev_nvme_attach_controller", 00:23:05.163 "params": { 00:23:05.163 "name": "TLSTEST", 00:23:05.163 "trtype": "TCP", 00:23:05.163 "adrfam": "IPv4", 00:23:05.163 "traddr": "10.0.0.2", 00:23:05.163 "trsvcid": "4420", 00:23:05.163 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.163 "prchk_reftag": false, 00:23:05.163 "prchk_guard": false, 00:23:05.163 "ctrlr_loss_timeout_sec": 0, 00:23:05.163 "reconnect_delay_sec": 0, 00:23:05.163 "fast_io_fail_timeout_sec": 0, 00:23:05.163 "psk": "key0", 00:23:05.163 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:05.163 "hdgst": false, 00:23:05.163 "ddgst": false, 00:23:05.163 "multipath": "multipath" 00:23:05.163 } 00:23:05.163 }, 00:23:05.163 { 00:23:05.163 "method": "bdev_nvme_set_hotplug", 00:23:05.163 "params": { 00:23:05.163 "period_us": 100000, 00:23:05.163 "enable": false 00:23:05.163 } 00:23:05.163 }, 00:23:05.163 { 00:23:05.163 "method": "bdev_wait_for_examine" 00:23:05.163 } 00:23:05.163 ] 00:23:05.163 }, 00:23:05.163 { 00:23:05.163 "subsystem": "nbd", 00:23:05.163 "config": [] 00:23:05.163 } 00:23:05.163 ] 00:23:05.163 }' 00:23:05.163 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 688340 00:23:05.163 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 688340 ']' 00:23:05.163 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 688340 00:23:05.163 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:05.163 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:05.163 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 688340 00:23:05.163 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:05.163 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:05.163 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 688340' 00:23:05.163 killing process with pid 688340 00:23:05.163 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 688340 00:23:05.163 Received shutdown signal, test time was about 10.000000 seconds 00:23:05.163 00:23:05.163 Latency(us) 00:23:05.163 [2024-12-05T12:54:47.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.163 [2024-12-05T12:54:47.750Z] =================================================================================================================== 00:23:05.163 [2024-12-05T12:54:47.750Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:05.163 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 688340 00:23:05.421 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 688006 00:23:05.421 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 688006 ']' 00:23:05.421 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 688006 00:23:05.421 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:05.421 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:05.421 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 688006 00:23:05.421 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:05.421 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:05.421 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 688006' 00:23:05.421 killing process with pid 688006 00:23:05.421 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 688006 00:23:05.421 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 688006 00:23:05.680 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:05.680 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:05.680 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:05.680 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:05.680 "subsystems": [ 00:23:05.680 { 00:23:05.680 "subsystem": "keyring", 00:23:05.680 "config": [ 00:23:05.680 { 00:23:05.680 "method": "keyring_file_add_key", 00:23:05.680 "params": { 00:23:05.680 "name": "key0", 00:23:05.680 "path": "/tmp/tmp.ajB8RVuSSK" 00:23:05.680 } 00:23:05.680 } 00:23:05.680 ] 00:23:05.680 }, 00:23:05.680 { 00:23:05.680 "subsystem": "iobuf", 00:23:05.680 "config": [ 00:23:05.680 { 00:23:05.680 "method": "iobuf_set_options", 00:23:05.680 "params": { 00:23:05.680 "small_pool_count": 8192, 00:23:05.680 "large_pool_count": 1024, 00:23:05.680 "small_bufsize": 8192, 00:23:05.680 "large_bufsize": 135168, 00:23:05.680 "enable_numa": false 00:23:05.680 } 00:23:05.680 } 00:23:05.680 ] 00:23:05.680 }, 00:23:05.680 
{ 00:23:05.680 "subsystem": "sock", 00:23:05.680 "config": [ 00:23:05.680 { 00:23:05.680 "method": "sock_set_default_impl", 00:23:05.680 "params": { 00:23:05.680 "impl_name": "posix" 00:23:05.680 } 00:23:05.680 }, 00:23:05.680 { 00:23:05.680 "method": "sock_impl_set_options", 00:23:05.680 "params": { 00:23:05.680 "impl_name": "ssl", 00:23:05.680 "recv_buf_size": 4096, 00:23:05.680 "send_buf_size": 4096, 00:23:05.680 "enable_recv_pipe": true, 00:23:05.680 "enable_quickack": false, 00:23:05.680 "enable_placement_id": 0, 00:23:05.680 "enable_zerocopy_send_server": true, 00:23:05.680 "enable_zerocopy_send_client": false, 00:23:05.680 "zerocopy_threshold": 0, 00:23:05.680 "tls_version": 0, 00:23:05.680 "enable_ktls": false 00:23:05.680 } 00:23:05.680 }, 00:23:05.680 { 00:23:05.680 "method": "sock_impl_set_options", 00:23:05.680 "params": { 00:23:05.680 "impl_name": "posix", 00:23:05.680 "recv_buf_size": 2097152, 00:23:05.680 "send_buf_size": 2097152, 00:23:05.680 "enable_recv_pipe": true, 00:23:05.680 "enable_quickack": false, 00:23:05.680 "enable_placement_id": 0, 00:23:05.680 "enable_zerocopy_send_server": true, 00:23:05.680 "enable_zerocopy_send_client": false, 00:23:05.680 "zerocopy_threshold": 0, 00:23:05.680 "tls_version": 0, 00:23:05.680 "enable_ktls": false 00:23:05.680 } 00:23:05.680 } 00:23:05.680 ] 00:23:05.680 }, 00:23:05.680 { 00:23:05.680 "subsystem": "vmd", 00:23:05.680 "config": [] 00:23:05.680 }, 00:23:05.680 { 00:23:05.680 "subsystem": "accel", 00:23:05.680 "config": [ 00:23:05.680 { 00:23:05.680 "method": "accel_set_options", 00:23:05.680 "params": { 00:23:05.680 "small_cache_size": 128, 00:23:05.680 "large_cache_size": 16, 00:23:05.680 "task_count": 2048, 00:23:05.680 "sequence_count": 2048, 00:23:05.680 "buf_count": 2048 00:23:05.680 } 00:23:05.680 } 00:23:05.680 ] 00:23:05.680 }, 00:23:05.680 { 00:23:05.680 "subsystem": "bdev", 00:23:05.680 "config": [ 00:23:05.680 { 00:23:05.680 "method": "bdev_set_options", 00:23:05.680 "params": { 00:23:05.680 
"bdev_io_pool_size": 65535, 00:23:05.680 "bdev_io_cache_size": 256, 00:23:05.680 "bdev_auto_examine": true, 00:23:05.680 "iobuf_small_cache_size": 128, 00:23:05.680 "iobuf_large_cache_size": 16 00:23:05.680 } 00:23:05.680 }, 00:23:05.680 { 00:23:05.680 "method": "bdev_raid_set_options", 00:23:05.680 "params": { 00:23:05.680 "process_window_size_kb": 1024, 00:23:05.680 "process_max_bandwidth_mb_sec": 0 00:23:05.680 } 00:23:05.680 }, 00:23:05.680 { 00:23:05.680 "method": "bdev_iscsi_set_options", 00:23:05.680 "params": { 00:23:05.680 "timeout_sec": 30 00:23:05.680 } 00:23:05.680 }, 00:23:05.680 { 00:23:05.680 "method": "bdev_nvme_set_options", 00:23:05.680 "params": { 00:23:05.680 "action_on_timeout": "none", 00:23:05.680 "timeout_us": 0, 00:23:05.680 "timeout_admin_us": 0, 00:23:05.680 "keep_alive_timeout_ms": 10000, 00:23:05.680 "arbitration_burst": 0, 00:23:05.680 "low_priority_weight": 0, 00:23:05.680 "medium_priority_weight": 0, 00:23:05.680 "high_priority_weight": 0, 00:23:05.680 "nvme_adminq_poll_period_us": 10000, 00:23:05.680 "nvme_ioq_poll_period_us": 0, 00:23:05.680 "io_queue_requests": 0, 00:23:05.680 "delay_cmd_submit": true, 00:23:05.680 "transport_retry_count": 4, 00:23:05.680 "bdev_retry_count": 3, 00:23:05.680 "transport_ack_timeout": 0, 00:23:05.680 "ctrlr_loss_timeout_sec": 0, 00:23:05.680 "reconnect_delay_sec": 0, 00:23:05.680 "fast_io_fail_timeout_sec": 0, 00:23:05.680 "disable_auto_failback": false, 00:23:05.680 "generate_uuids": false, 00:23:05.680 "transport_tos": 0, 00:23:05.680 "nvme_error_stat": false, 00:23:05.680 "rdma_srq_size": 0, 00:23:05.680 "io_path_stat": false, 00:23:05.680 "allow_accel_sequence": false, 00:23:05.680 "rdma_max_cq_size": 0, 00:23:05.680 "rdma_cm_event_timeout_ms": 0, 00:23:05.680 "dhchap_digests": [ 00:23:05.680 "sha256", 00:23:05.680 "sha384", 00:23:05.680 "sha512" 00:23:05.680 ], 00:23:05.680 "dhchap_dhgroups": [ 00:23:05.680 "null", 00:23:05.680 "ffdhe2048", 00:23:05.680 "ffdhe3072", 00:23:05.680 "ffdhe4096", 
00:23:05.680 "ffdhe6144", 00:23:05.680 "ffdhe8192" 00:23:05.680 ] 00:23:05.680 } 00:23:05.680 }, 00:23:05.680 { 00:23:05.680 "method": "bdev_nvme_set_hotplug", 00:23:05.681 "params": { 00:23:05.681 "period_us": 100000, 00:23:05.681 "enable": false 00:23:05.681 } 00:23:05.681 }, 00:23:05.681 { 00:23:05.681 "method": "bdev_malloc_create", 00:23:05.681 "params": { 00:23:05.681 "name": "malloc0", 00:23:05.681 "num_blocks": 8192, 00:23:05.681 "block_size": 4096, 00:23:05.681 "physical_block_size": 4096, 00:23:05.681 "uuid": "31576b11-db94-4102-86d5-fddf99b72c79", 00:23:05.681 "optimal_io_boundary": 0, 00:23:05.681 "md_size": 0, 00:23:05.681 "dif_type": 0, 00:23:05.681 "dif_is_head_of_md": false, 00:23:05.681 "dif_pi_format": 0 00:23:05.681 } 00:23:05.681 }, 00:23:05.681 { 00:23:05.681 "method": "bdev_wait_for_examine" 00:23:05.681 } 00:23:05.681 ] 00:23:05.681 }, 00:23:05.681 { 00:23:05.681 "subsystem": "nbd", 00:23:05.681 "config": [] 00:23:05.681 }, 00:23:05.681 { 00:23:05.681 "subsystem": "scheduler", 00:23:05.681 "config": [ 00:23:05.681 { 00:23:05.681 "method": "framework_set_scheduler", 00:23:05.681 "params": { 00:23:05.681 "name": "static" 00:23:05.681 } 00:23:05.681 } 00:23:05.681 ] 00:23:05.681 }, 00:23:05.681 { 00:23:05.681 "subsystem": "nvmf", 00:23:05.681 "config": [ 00:23:05.681 { 00:23:05.681 "method": "nvmf_set_config", 00:23:05.681 "params": { 00:23:05.681 "discovery_filter": "match_any", 00:23:05.681 "admin_cmd_passthru": { 00:23:05.681 "identify_ctrlr": false 00:23:05.681 }, 00:23:05.681 "dhchap_digests": [ 00:23:05.681 "sha256", 00:23:05.681 "sha384", 00:23:05.681 "sha512" 00:23:05.681 ], 00:23:05.681 "dhchap_dhgroups": [ 00:23:05.681 "null", 00:23:05.681 "ffdhe2048", 00:23:05.681 "ffdhe3072", 00:23:05.681 "ffdhe4096", 00:23:05.681 "ffdhe6144", 00:23:05.681 "ffdhe8192" 00:23:05.681 ] 00:23:05.681 } 00:23:05.681 }, 00:23:05.681 { 00:23:05.681 "method": "nvmf_set_max_subsystems", 00:23:05.681 "params": { 00:23:05.681 "max_subsystems": 1024 00:23:05.681 
} 00:23:05.681 }, 00:23:05.681 { 00:23:05.681 "method": "nvmf_set_crdt", 00:23:05.681 "params": { 00:23:05.681 "crdt1": 0, 00:23:05.681 "crdt2": 0, 00:23:05.681 "crdt3": 0 00:23:05.681 } 00:23:05.681 }, 00:23:05.681 { 00:23:05.681 "method": "nvmf_create_transport", 00:23:05.681 "params": { 00:23:05.681 "trtype": "TCP", 00:23:05.681 "max_queue_depth": 128, 00:23:05.681 "max_io_qpairs_per_ctrlr": 127, 00:23:05.681 "in_capsule_data_size": 4096, 00:23:05.681 "max_io_size": 131072, 00:23:05.681 "io_unit_size": 131072, 00:23:05.681 "max_aq_depth": 128, 00:23:05.681 "num_shared_buffers": 511, 00:23:05.681 "buf_cache_size": 4294967295, 00:23:05.681 "dif_insert_or_strip": false, 00:23:05.681 "zcopy": false, 00:23:05.681 "c2h_success": false, 00:23:05.681 "sock_priority": 0, 00:23:05.681 "abort_timeout_sec": 1, 00:23:05.681 "ack_timeout": 0, 00:23:05.681 "data_wr_pool_size": 0 00:23:05.681 } 00:23:05.681 }, 00:23:05.681 { 00:23:05.681 "method": "nvmf_create_subsystem", 00:23:05.681 "params": { 00:23:05.681 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.681 "allow_any_host": false, 00:23:05.681 "serial_number": "SPDK00000000000001", 00:23:05.681 "model_number": "SPDK bdev Controller", 00:23:05.681 "max_namespaces": 10, 00:23:05.681 "min_cntlid": 1, 00:23:05.681 "max_cntlid": 65519, 00:23:05.681 "ana_reporting": false 00:23:05.681 } 00:23:05.681 }, 00:23:05.681 { 00:23:05.681 "method": "nvmf_subsystem_add_host", 00:23:05.681 "params": { 00:23:05.681 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.681 "host": "nqn.2016-06.io.spdk:host1", 00:23:05.681 "psk": "key0" 00:23:05.681 } 00:23:05.681 }, 00:23:05.681 { 00:23:05.681 "method": "nvmf_subsystem_add_ns", 00:23:05.681 "params": { 00:23:05.681 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.681 "namespace": { 00:23:05.681 "nsid": 1, 00:23:05.681 "bdev_name": "malloc0", 00:23:05.681 "nguid": "31576B11DB94410286D5FDDF99B72C79", 00:23:05.681 "uuid": "31576b11-db94-4102-86d5-fddf99b72c79", 00:23:05.681 "no_auto_visible": false 
00:23:05.681 } 00:23:05.681 } 00:23:05.681 }, 00:23:05.681 { 00:23:05.681 "method": "nvmf_subsystem_add_listener", 00:23:05.681 "params": { 00:23:05.681 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.681 "listen_address": { 00:23:05.681 "trtype": "TCP", 00:23:05.681 "adrfam": "IPv4", 00:23:05.681 "traddr": "10.0.0.2", 00:23:05.681 "trsvcid": "4420" 00:23:05.681 }, 00:23:05.681 "secure_channel": true 00:23:05.681 } 00:23:05.681 } 00:23:05.681 ] 00:23:05.681 } 00:23:05.681 ] 00:23:05.681 }' 00:23:05.681 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.681 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=688726 00:23:05.681 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:05.681 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 688726 00:23:05.681 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 688726 ']' 00:23:05.681 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.681 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:05.681 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:05.681 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:05.681 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.681 [2024-12-05 13:54:48.105066] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:23:05.681 [2024-12-05 13:54:48.105112] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.681 [2024-12-05 13:54:48.180311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.681 [2024-12-05 13:54:48.217960] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.681 [2024-12-05 13:54:48.217994] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.681 [2024-12-05 13:54:48.218001] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.681 [2024-12-05 13:54:48.218006] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.681 [2024-12-05 13:54:48.218013] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:05.681 [2024-12-05 13:54:48.218629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.939 [2024-12-05 13:54:48.431622] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.939 [2024-12-05 13:54:48.463636] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:05.939 [2024-12-05 13:54:48.463841] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.506 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.506 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:06.506 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:06.506 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:06.506 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.506 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.506 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=688758 00:23:06.506 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 688758 /var/tmp/bdevperf.sock 00:23:06.506 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 688758 ']' 00:23:06.506 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:06.506 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:06.506 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:23:06.506 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:06.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:06.506 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:06.506 "subsystems": [ 00:23:06.506 { 00:23:06.506 "subsystem": "keyring", 00:23:06.506 "config": [ 00:23:06.506 { 00:23:06.506 "method": "keyring_file_add_key", 00:23:06.506 "params": { 00:23:06.506 "name": "key0", 00:23:06.506 "path": "/tmp/tmp.ajB8RVuSSK" 00:23:06.506 } 00:23:06.506 } 00:23:06.506 ] 00:23:06.506 }, 00:23:06.506 { 00:23:06.506 "subsystem": "iobuf", 00:23:06.506 "config": [ 00:23:06.506 { 00:23:06.506 "method": "iobuf_set_options", 00:23:06.506 "params": { 00:23:06.506 "small_pool_count": 8192, 00:23:06.506 "large_pool_count": 1024, 00:23:06.506 "small_bufsize": 8192, 00:23:06.506 "large_bufsize": 135168, 00:23:06.506 "enable_numa": false 00:23:06.506 } 00:23:06.506 } 00:23:06.506 ] 00:23:06.506 }, 00:23:06.506 { 00:23:06.506 "subsystem": "sock", 00:23:06.506 "config": [ 00:23:06.506 { 00:23:06.506 "method": "sock_set_default_impl", 00:23:06.506 "params": { 00:23:06.506 "impl_name": "posix" 00:23:06.506 } 00:23:06.506 }, 00:23:06.506 { 00:23:06.506 "method": "sock_impl_set_options", 00:23:06.506 "params": { 00:23:06.506 "impl_name": "ssl", 00:23:06.506 "recv_buf_size": 4096, 00:23:06.506 "send_buf_size": 4096, 00:23:06.506 "enable_recv_pipe": true, 00:23:06.506 "enable_quickack": false, 00:23:06.506 "enable_placement_id": 0, 00:23:06.506 "enable_zerocopy_send_server": true, 00:23:06.506 "enable_zerocopy_send_client": false, 00:23:06.506 "zerocopy_threshold": 0, 00:23:06.506 "tls_version": 0, 00:23:06.506 "enable_ktls": false 00:23:06.506 } 00:23:06.506 }, 00:23:06.506 { 00:23:06.506 "method": "sock_impl_set_options", 00:23:06.506 "params": { 
00:23:06.506 "impl_name": "posix", 00:23:06.506 "recv_buf_size": 2097152, 00:23:06.506 "send_buf_size": 2097152, 00:23:06.506 "enable_recv_pipe": true, 00:23:06.506 "enable_quickack": false, 00:23:06.506 "enable_placement_id": 0, 00:23:06.506 "enable_zerocopy_send_server": true, 00:23:06.506 "enable_zerocopy_send_client": false, 00:23:06.506 "zerocopy_threshold": 0, 00:23:06.506 "tls_version": 0, 00:23:06.506 "enable_ktls": false 00:23:06.506 } 00:23:06.506 } 00:23:06.506 ] 00:23:06.506 }, 00:23:06.506 { 00:23:06.506 "subsystem": "vmd", 00:23:06.506 "config": [] 00:23:06.506 }, 00:23:06.506 { 00:23:06.506 "subsystem": "accel", 00:23:06.506 "config": [ 00:23:06.506 { 00:23:06.506 "method": "accel_set_options", 00:23:06.506 "params": { 00:23:06.506 "small_cache_size": 128, 00:23:06.506 "large_cache_size": 16, 00:23:06.506 "task_count": 2048, 00:23:06.506 "sequence_count": 2048, 00:23:06.506 "buf_count": 2048 00:23:06.506 } 00:23:06.506 } 00:23:06.506 ] 00:23:06.506 }, 00:23:06.506 { 00:23:06.506 "subsystem": "bdev", 00:23:06.506 "config": [ 00:23:06.506 { 00:23:06.506 "method": "bdev_set_options", 00:23:06.506 "params": { 00:23:06.506 "bdev_io_pool_size": 65535, 00:23:06.506 "bdev_io_cache_size": 256, 00:23:06.506 "bdev_auto_examine": true, 00:23:06.506 "iobuf_small_cache_size": 128, 00:23:06.506 "iobuf_large_cache_size": 16 00:23:06.506 } 00:23:06.506 }, 00:23:06.506 { 00:23:06.506 "method": "bdev_raid_set_options", 00:23:06.506 "params": { 00:23:06.506 "process_window_size_kb": 1024, 00:23:06.506 "process_max_bandwidth_mb_sec": 0 00:23:06.506 } 00:23:06.506 }, 00:23:06.506 { 00:23:06.506 "method": "bdev_iscsi_set_options", 00:23:06.506 "params": { 00:23:06.506 "timeout_sec": 30 00:23:06.506 } 00:23:06.506 }, 00:23:06.506 { 00:23:06.506 "method": "bdev_nvme_set_options", 00:23:06.506 "params": { 00:23:06.506 "action_on_timeout": "none", 00:23:06.506 "timeout_us": 0, 00:23:06.506 "timeout_admin_us": 0, 00:23:06.506 "keep_alive_timeout_ms": 10000, 00:23:06.506 
"arbitration_burst": 0, 00:23:06.506 "low_priority_weight": 0, 00:23:06.506 "medium_priority_weight": 0, 00:23:06.506 "high_priority_weight": 0, 00:23:06.506 "nvme_adminq_poll_period_us": 10000, 00:23:06.506 "nvme_ioq_poll_period_us": 0, 00:23:06.506 "io_queue_requests": 512, 00:23:06.506 "delay_cmd_submit": true, 00:23:06.506 "transport_retry_count": 4, 00:23:06.506 "bdev_retry_count": 3, 00:23:06.506 "transport_ack_timeout": 0, 00:23:06.506 "ctrlr_loss_timeout_sec": 0, 00:23:06.506 "reconnect_delay_sec": 0, 00:23:06.506 "fast_io_fail_timeout_sec": 0, 00:23:06.506 "disable_auto_failback": false, 00:23:06.506 "generate_uuids": false, 00:23:06.506 "transport_tos": 0, 00:23:06.506 "nvme_error_stat": false, 00:23:06.506 "rdma_srq_size": 0, 00:23:06.506 "io_path_stat": false, 00:23:06.506 "allow_accel_sequence": false, 00:23:06.506 "rdma_max_cq_size": 0, 00:23:06.506 "rdma_cm_event_timeout_ms": 0, 00:23:06.506 "dhchap_digests": [ 00:23:06.506 "sha256", 00:23:06.506 "sha384", 00:23:06.506 "sha512" 00:23:06.506 ], 00:23:06.506 "dhchap_dhgroups": [ 00:23:06.506 "null", 00:23:06.506 "ffdhe2048", 00:23:06.506 "ffdhe3072", 00:23:06.506 "ffdhe4096", 00:23:06.506 "ffdhe6144", 00:23:06.506 "ffdhe8192" 00:23:06.506 ] 00:23:06.506 } 00:23:06.506 }, 00:23:06.506 { 00:23:06.506 "method": "bdev_nvme_attach_controller", 00:23:06.506 "params": { 00:23:06.506 "name": "TLSTEST", 00:23:06.506 "trtype": "TCP", 00:23:06.506 "adrfam": "IPv4", 00:23:06.506 "traddr": "10.0.0.2", 00:23:06.506 "trsvcid": "4420", 00:23:06.506 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:06.506 "prchk_reftag": false, 00:23:06.506 "prchk_guard": false, 00:23:06.506 "ctrlr_loss_timeout_sec": 0, 00:23:06.506 "reconnect_delay_sec": 0, 00:23:06.506 "fast_io_fail_timeout_sec": 0, 00:23:06.506 "psk": "key0", 00:23:06.506 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:06.506 "hdgst": false, 00:23:06.506 "ddgst": false, 00:23:06.506 "multipath": "multipath" 00:23:06.506 } 00:23:06.506 }, 00:23:06.506 { 00:23:06.506 
"method": "bdev_nvme_set_hotplug", 00:23:06.506 "params": { 00:23:06.506 "period_us": 100000, 00:23:06.506 "enable": false 00:23:06.506 } 00:23:06.506 }, 00:23:06.506 { 00:23:06.506 "method": "bdev_wait_for_examine" 00:23:06.506 } 00:23:06.506 ] 00:23:06.506 }, 00:23:06.506 { 00:23:06.506 "subsystem": "nbd", 00:23:06.506 "config": [] 00:23:06.506 } 00:23:06.507 ] 00:23:06.507 }' 00:23:06.507 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.507 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.507 [2024-12-05 13:54:49.007182] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:23:06.507 [2024-12-05 13:54:49.007228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid688758 ] 00:23:06.507 [2024-12-05 13:54:49.082310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.764 [2024-12-05 13:54:49.124405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:06.764 [2024-12-05 13:54:49.276268] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:07.330 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.330 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:07.330 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:07.589 Running I/O for 10 seconds... 
00:23:09.462 5062.00 IOPS, 19.77 MiB/s [2024-12-05T12:54:52.985Z] 5349.00 IOPS, 20.89 MiB/s [2024-12-05T12:54:54.361Z] 5449.67 IOPS, 21.29 MiB/s [2024-12-05T12:54:55.304Z] 5493.25 IOPS, 21.46 MiB/s [2024-12-05T12:54:56.238Z] 5528.00 IOPS, 21.59 MiB/s [2024-12-05T12:54:57.173Z] 5507.00 IOPS, 21.51 MiB/s [2024-12-05T12:54:58.118Z] 5521.43 IOPS, 21.57 MiB/s [2024-12-05T12:54:59.053Z] 5532.38 IOPS, 21.61 MiB/s [2024-12-05T12:54:59.989Z] 5553.78 IOPS, 21.69 MiB/s [2024-12-05T12:54:59.989Z] 5557.60 IOPS, 21.71 MiB/s 00:23:17.402 Latency(us) 00:23:17.402 [2024-12-05T12:54:59.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.402 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:17.402 Verification LBA range: start 0x0 length 0x2000 00:23:17.402 TLSTESTn1 : 10.01 5562.96 21.73 0.00 0.00 22975.81 4712.35 48184.56 00:23:17.402 [2024-12-05T12:54:59.989Z] =================================================================================================================== 00:23:17.402 [2024-12-05T12:54:59.989Z] Total : 5562.96 21.73 0.00 0.00 22975.81 4712.35 48184.56 00:23:17.402 { 00:23:17.402 "results": [ 00:23:17.402 { 00:23:17.402 "job": "TLSTESTn1", 00:23:17.402 "core_mask": "0x4", 00:23:17.402 "workload": "verify", 00:23:17.402 "status": "finished", 00:23:17.402 "verify_range": { 00:23:17.402 "start": 0, 00:23:17.402 "length": 8192 00:23:17.402 }, 00:23:17.402 "queue_depth": 128, 00:23:17.402 "io_size": 4096, 00:23:17.402 "runtime": 10.013203, 00:23:17.402 "iops": 5562.955230209554, 00:23:17.402 "mibps": 21.730293868006072, 00:23:17.402 "io_failed": 0, 00:23:17.402 "io_timeout": 0, 00:23:17.402 "avg_latency_us": 22975.813268157737, 00:23:17.402 "min_latency_us": 4712.350476190476, 00:23:17.402 "max_latency_us": 48184.56380952381 00:23:17.402 } 00:23:17.402 ], 00:23:17.402 "core_count": 1 00:23:17.402 } 00:23:17.661 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:23:17.661 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 688758 00:23:17.661 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 688758 ']' 00:23:17.661 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 688758 00:23:17.661 13:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:17.661 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.661 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 688758 00:23:17.661 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:17.661 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:17.661 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 688758' 00:23:17.661 killing process with pid 688758 00:23:17.661 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 688758 00:23:17.661 Received shutdown signal, test time was about 10.000000 seconds 00:23:17.661 00:23:17.661 Latency(us) 00:23:17.661 [2024-12-05T12:55:00.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.661 [2024-12-05T12:55:00.248Z] =================================================================================================================== 00:23:17.661 [2024-12-05T12:55:00.248Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:17.661 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 688758 00:23:17.661 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 688726 00:23:17.661 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 
-- # '[' -z 688726 ']' 00:23:17.661 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 688726 00:23:17.661 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:17.661 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.661 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 688726 00:23:17.920 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:17.920 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:17.920 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 688726' 00:23:17.920 killing process with pid 688726 00:23:17.920 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 688726 00:23:17.920 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 688726 00:23:17.920 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:17.920 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:17.920 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:17.920 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.920 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=690632 00:23:17.920 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:17.920 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 690632 00:23:17.920 13:55:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 690632 ']' 00:23:17.920 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.920 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:17.920 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.920 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:17.920 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.920 [2024-12-05 13:55:00.483966] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:23:17.920 [2024-12-05 13:55:00.484014] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.218 [2024-12-05 13:55:00.563462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.218 [2024-12-05 13:55:00.605413] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:18.218 [2024-12-05 13:55:00.605450] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:18.218 [2024-12-05 13:55:00.605459] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:18.218 [2024-12-05 13:55:00.605467] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:18.218 [2024-12-05 13:55:00.605473] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:18.218 [2024-12-05 13:55:00.606051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.218 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:18.218 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:18.218 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:18.218 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:18.218 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.218 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.218 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.ajB8RVuSSK 00:23:18.218 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ajB8RVuSSK 00:23:18.218 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:18.544 [2024-12-05 13:55:00.919437] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.544 13:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:18.846 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:18.846 [2024-12-05 13:55:01.312444] tcp.c:1049:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:23:18.846 [2024-12-05 13:55:01.312662] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.846 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:19.105 malloc0 00:23:19.105 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:19.363 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ajB8RVuSSK 00:23:19.363 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:19.620 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=691072 00:23:19.620 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:19.620 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:19.620 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 691072 /var/tmp/bdevperf.sock 00:23:19.620 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 691072 ']' 00:23:19.620 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.621 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:19.621 13:55:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:19.621 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:19.621 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.621 [2024-12-05 13:55:02.185382] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:23:19.621 [2024-12-05 13:55:02.185434] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid691072 ] 00:23:19.879 [2024-12-05 13:55:02.261438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.879 [2024-12-05 13:55:02.303523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.879 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:19.879 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:19.879 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ajB8RVuSSK 00:23:20.136 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:20.394 [2024-12-05 13:55:02.760613] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered 
experimental 00:23:20.394 nvme0n1 00:23:20.394 13:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:20.394 Running I/O for 1 seconds... 00:23:21.768 5428.00 IOPS, 21.20 MiB/s 00:23:21.768 Latency(us) 00:23:21.768 [2024-12-05T12:55:04.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.768 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:21.768 Verification LBA range: start 0x0 length 0x2000 00:23:21.768 nvme0n1 : 1.02 5446.90 21.28 0.00 0.00 23283.31 7115.34 26464.06 00:23:21.768 [2024-12-05T12:55:04.355Z] =================================================================================================================== 00:23:21.768 [2024-12-05T12:55:04.355Z] Total : 5446.90 21.28 0.00 0.00 23283.31 7115.34 26464.06 00:23:21.768 { 00:23:21.768 "results": [ 00:23:21.768 { 00:23:21.768 "job": "nvme0n1", 00:23:21.768 "core_mask": "0x2", 00:23:21.768 "workload": "verify", 00:23:21.768 "status": "finished", 00:23:21.768 "verify_range": { 00:23:21.768 "start": 0, 00:23:21.768 "length": 8192 00:23:21.768 }, 00:23:21.768 "queue_depth": 128, 00:23:21.768 "io_size": 4096, 00:23:21.768 "runtime": 1.02003, 00:23:21.768 "iops": 5446.8986206288055, 00:23:21.768 "mibps": 21.27694773683127, 00:23:21.768 "io_failed": 0, 00:23:21.768 "io_timeout": 0, 00:23:21.768 "avg_latency_us": 23283.30751139909, 00:23:21.768 "min_latency_us": 7115.337142857143, 00:23:21.768 "max_latency_us": 26464.06095238095 00:23:21.768 } 00:23:21.768 ], 00:23:21.768 "core_count": 1 00:23:21.768 } 00:23:21.768 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 691072 00:23:21.768 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 691072 ']' 00:23:21.768 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 
-- # kill -0 691072 00:23:21.768 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:21.768 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.768 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 691072 00:23:21.768 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:21.768 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:21.768 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 691072' 00:23:21.768 killing process with pid 691072 00:23:21.768 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 691072 00:23:21.768 Received shutdown signal, test time was about 1.000000 seconds 00:23:21.768 00:23:21.768 Latency(us) 00:23:21.768 [2024-12-05T12:55:04.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.768 [2024-12-05T12:55:04.355Z] =================================================================================================================== 00:23:21.768 [2024-12-05T12:55:04.355Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:21.768 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 691072 00:23:21.768 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 690632 00:23:21.768 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 690632 ']' 00:23:21.768 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 690632 00:23:21.768 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:21.768 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:23:21.768 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 690632 00:23:21.768 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:21.768 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:21.768 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 690632' 00:23:21.768 killing process with pid 690632 00:23:21.768 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 690632 00:23:21.768 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 690632 00:23:22.027 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:22.027 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:22.027 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:22.027 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.027 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=691334 00:23:22.027 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:22.027 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 691334 00:23:22.027 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 691334 ']' 00:23:22.027 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.027 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:22.027 13:55:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:22.027 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:22.027 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.027 [2024-12-05 13:55:04.462821] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:23:22.027 [2024-12-05 13:55:04.462868] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.027 [2024-12-05 13:55:04.537141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.027 [2024-12-05 13:55:04.576891] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:22.027 [2024-12-05 13:55:04.576930] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:22.027 [2024-12-05 13:55:04.576937] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:22.027 [2024-12-05 13:55:04.576944] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:22.027 [2024-12-05 13:55:04.576949] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:22.027 [2024-12-05 13:55:04.577548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.285 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:22.285 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:22.285 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:22.285 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:22.285 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.285 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.285 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:22.285 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.285 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.285 [2024-12-05 13:55:04.714300] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.285 malloc0 00:23:22.285 [2024-12-05 13:55:04.742518] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:22.285 [2024-12-05 13:55:04.742729] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.285 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.285 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=691438 00:23:22.286 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 691438 /var/tmp/bdevperf.sock 00:23:22.286 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:22.286 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 691438 ']' 00:23:22.286 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:22.286 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:22.286 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:22.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:22.286 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:22.286 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.286 [2024-12-05 13:55:04.815664] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:23:22.286 [2024-12-05 13:55:04.815702] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid691438 ] 00:23:22.606 [2024-12-05 13:55:04.889262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.606 [2024-12-05 13:55:04.931154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:22.606 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:22.606 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:22.606 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ajB8RVuSSK 00:23:22.865 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:22.865 [2024-12-05 13:55:05.392314] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:23.128 nvme0n1 00:23:23.128 13:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:23.128 Running I/O for 1 seconds... 
00:23:24.061 5468.00 IOPS, 21.36 MiB/s 00:23:24.061 Latency(us) 00:23:24.061 [2024-12-05T12:55:06.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.061 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:24.061 Verification LBA range: start 0x0 length 0x2000 00:23:24.061 nvme0n1 : 1.01 5528.23 21.59 0.00 0.00 23000.81 4774.77 27213.04 00:23:24.061 [2024-12-05T12:55:06.648Z] =================================================================================================================== 00:23:24.061 [2024-12-05T12:55:06.648Z] Total : 5528.23 21.59 0.00 0.00 23000.81 4774.77 27213.04 00:23:24.061 { 00:23:24.061 "results": [ 00:23:24.061 { 00:23:24.061 "job": "nvme0n1", 00:23:24.061 "core_mask": "0x2", 00:23:24.061 "workload": "verify", 00:23:24.061 "status": "finished", 00:23:24.061 "verify_range": { 00:23:24.061 "start": 0, 00:23:24.061 "length": 8192 00:23:24.061 }, 00:23:24.061 "queue_depth": 128, 00:23:24.061 "io_size": 4096, 00:23:24.061 "runtime": 1.01244, 00:23:24.061 "iops": 5528.228833313579, 00:23:24.061 "mibps": 21.59464388013117, 00:23:24.061 "io_failed": 0, 00:23:24.061 "io_timeout": 0, 00:23:24.061 "avg_latency_us": 23000.809786194986, 00:23:24.061 "min_latency_us": 4774.765714285714, 00:23:24.061 "max_latency_us": 27213.04380952381 00:23:24.061 } 00:23:24.061 ], 00:23:24.061 "core_count": 1 00:23:24.061 } 00:23:24.061 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:24.061 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.061 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.319 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.319 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:24.319 "subsystems": [ 00:23:24.319 { 00:23:24.319 "subsystem": 
"keyring", 00:23:24.319 "config": [ 00:23:24.319 { 00:23:24.319 "method": "keyring_file_add_key", 00:23:24.319 "params": { 00:23:24.319 "name": "key0", 00:23:24.319 "path": "/tmp/tmp.ajB8RVuSSK" 00:23:24.319 } 00:23:24.319 } 00:23:24.319 ] 00:23:24.319 }, 00:23:24.319 { 00:23:24.319 "subsystem": "iobuf", 00:23:24.319 "config": [ 00:23:24.319 { 00:23:24.319 "method": "iobuf_set_options", 00:23:24.319 "params": { 00:23:24.319 "small_pool_count": 8192, 00:23:24.319 "large_pool_count": 1024, 00:23:24.319 "small_bufsize": 8192, 00:23:24.319 "large_bufsize": 135168, 00:23:24.319 "enable_numa": false 00:23:24.319 } 00:23:24.319 } 00:23:24.319 ] 00:23:24.319 }, 00:23:24.319 { 00:23:24.319 "subsystem": "sock", 00:23:24.319 "config": [ 00:23:24.319 { 00:23:24.319 "method": "sock_set_default_impl", 00:23:24.319 "params": { 00:23:24.319 "impl_name": "posix" 00:23:24.319 } 00:23:24.319 }, 00:23:24.319 { 00:23:24.319 "method": "sock_impl_set_options", 00:23:24.319 "params": { 00:23:24.319 "impl_name": "ssl", 00:23:24.319 "recv_buf_size": 4096, 00:23:24.319 "send_buf_size": 4096, 00:23:24.319 "enable_recv_pipe": true, 00:23:24.319 "enable_quickack": false, 00:23:24.319 "enable_placement_id": 0, 00:23:24.319 "enable_zerocopy_send_server": true, 00:23:24.319 "enable_zerocopy_send_client": false, 00:23:24.319 "zerocopy_threshold": 0, 00:23:24.319 "tls_version": 0, 00:23:24.319 "enable_ktls": false 00:23:24.319 } 00:23:24.319 }, 00:23:24.319 { 00:23:24.319 "method": "sock_impl_set_options", 00:23:24.319 "params": { 00:23:24.319 "impl_name": "posix", 00:23:24.319 "recv_buf_size": 2097152, 00:23:24.319 "send_buf_size": 2097152, 00:23:24.319 "enable_recv_pipe": true, 00:23:24.319 "enable_quickack": false, 00:23:24.319 "enable_placement_id": 0, 00:23:24.319 "enable_zerocopy_send_server": true, 00:23:24.319 "enable_zerocopy_send_client": false, 00:23:24.319 "zerocopy_threshold": 0, 00:23:24.319 "tls_version": 0, 00:23:24.319 "enable_ktls": false 00:23:24.319 } 00:23:24.319 } 00:23:24.319 
] 00:23:24.319 }, 00:23:24.319 { 00:23:24.319 "subsystem": "vmd", 00:23:24.319 "config": [] 00:23:24.319 }, 00:23:24.319 { 00:23:24.319 "subsystem": "accel", 00:23:24.319 "config": [ 00:23:24.319 { 00:23:24.319 "method": "accel_set_options", 00:23:24.319 "params": { 00:23:24.319 "small_cache_size": 128, 00:23:24.319 "large_cache_size": 16, 00:23:24.319 "task_count": 2048, 00:23:24.319 "sequence_count": 2048, 00:23:24.319 "buf_count": 2048 00:23:24.319 } 00:23:24.319 } 00:23:24.319 ] 00:23:24.319 }, 00:23:24.319 { 00:23:24.319 "subsystem": "bdev", 00:23:24.319 "config": [ 00:23:24.319 { 00:23:24.319 "method": "bdev_set_options", 00:23:24.319 "params": { 00:23:24.319 "bdev_io_pool_size": 65535, 00:23:24.319 "bdev_io_cache_size": 256, 00:23:24.319 "bdev_auto_examine": true, 00:23:24.319 "iobuf_small_cache_size": 128, 00:23:24.319 "iobuf_large_cache_size": 16 00:23:24.319 } 00:23:24.319 }, 00:23:24.319 { 00:23:24.319 "method": "bdev_raid_set_options", 00:23:24.319 "params": { 00:23:24.319 "process_window_size_kb": 1024, 00:23:24.319 "process_max_bandwidth_mb_sec": 0 00:23:24.319 } 00:23:24.319 }, 00:23:24.319 { 00:23:24.319 "method": "bdev_iscsi_set_options", 00:23:24.319 "params": { 00:23:24.319 "timeout_sec": 30 00:23:24.319 } 00:23:24.319 }, 00:23:24.319 { 00:23:24.319 "method": "bdev_nvme_set_options", 00:23:24.319 "params": { 00:23:24.319 "action_on_timeout": "none", 00:23:24.319 "timeout_us": 0, 00:23:24.319 "timeout_admin_us": 0, 00:23:24.319 "keep_alive_timeout_ms": 10000, 00:23:24.319 "arbitration_burst": 0, 00:23:24.319 "low_priority_weight": 0, 00:23:24.319 "medium_priority_weight": 0, 00:23:24.319 "high_priority_weight": 0, 00:23:24.320 "nvme_adminq_poll_period_us": 10000, 00:23:24.320 "nvme_ioq_poll_period_us": 0, 00:23:24.320 "io_queue_requests": 0, 00:23:24.320 "delay_cmd_submit": true, 00:23:24.320 "transport_retry_count": 4, 00:23:24.320 "bdev_retry_count": 3, 00:23:24.320 "transport_ack_timeout": 0, 00:23:24.320 "ctrlr_loss_timeout_sec": 0, 
00:23:24.320 "reconnect_delay_sec": 0, 00:23:24.320 "fast_io_fail_timeout_sec": 0, 00:23:24.320 "disable_auto_failback": false, 00:23:24.320 "generate_uuids": false, 00:23:24.320 "transport_tos": 0, 00:23:24.320 "nvme_error_stat": false, 00:23:24.320 "rdma_srq_size": 0, 00:23:24.320 "io_path_stat": false, 00:23:24.320 "allow_accel_sequence": false, 00:23:24.320 "rdma_max_cq_size": 0, 00:23:24.320 "rdma_cm_event_timeout_ms": 0, 00:23:24.320 "dhchap_digests": [ 00:23:24.320 "sha256", 00:23:24.320 "sha384", 00:23:24.320 "sha512" 00:23:24.320 ], 00:23:24.320 "dhchap_dhgroups": [ 00:23:24.320 "null", 00:23:24.320 "ffdhe2048", 00:23:24.320 "ffdhe3072", 00:23:24.320 "ffdhe4096", 00:23:24.320 "ffdhe6144", 00:23:24.320 "ffdhe8192" 00:23:24.320 ] 00:23:24.320 } 00:23:24.320 }, 00:23:24.320 { 00:23:24.320 "method": "bdev_nvme_set_hotplug", 00:23:24.320 "params": { 00:23:24.320 "period_us": 100000, 00:23:24.320 "enable": false 00:23:24.320 } 00:23:24.320 }, 00:23:24.320 { 00:23:24.320 "method": "bdev_malloc_create", 00:23:24.320 "params": { 00:23:24.320 "name": "malloc0", 00:23:24.320 "num_blocks": 8192, 00:23:24.320 "block_size": 4096, 00:23:24.320 "physical_block_size": 4096, 00:23:24.320 "uuid": "642aa83a-995e-4287-a76b-b2b4dfe03287", 00:23:24.320 "optimal_io_boundary": 0, 00:23:24.320 "md_size": 0, 00:23:24.320 "dif_type": 0, 00:23:24.320 "dif_is_head_of_md": false, 00:23:24.320 "dif_pi_format": 0 00:23:24.320 } 00:23:24.320 }, 00:23:24.320 { 00:23:24.320 "method": "bdev_wait_for_examine" 00:23:24.320 } 00:23:24.320 ] 00:23:24.320 }, 00:23:24.320 { 00:23:24.320 "subsystem": "nbd", 00:23:24.320 "config": [] 00:23:24.320 }, 00:23:24.320 { 00:23:24.320 "subsystem": "scheduler", 00:23:24.320 "config": [ 00:23:24.320 { 00:23:24.320 "method": "framework_set_scheduler", 00:23:24.320 "params": { 00:23:24.320 "name": "static" 00:23:24.320 } 00:23:24.320 } 00:23:24.320 ] 00:23:24.320 }, 00:23:24.320 { 00:23:24.320 "subsystem": "nvmf", 00:23:24.320 "config": [ 00:23:24.320 { 
00:23:24.320 "method": "nvmf_set_config", 00:23:24.320 "params": { 00:23:24.320 "discovery_filter": "match_any", 00:23:24.320 "admin_cmd_passthru": { 00:23:24.320 "identify_ctrlr": false 00:23:24.320 }, 00:23:24.320 "dhchap_digests": [ 00:23:24.320 "sha256", 00:23:24.320 "sha384", 00:23:24.320 "sha512" 00:23:24.320 ], 00:23:24.320 "dhchap_dhgroups": [ 00:23:24.320 "null", 00:23:24.320 "ffdhe2048", 00:23:24.320 "ffdhe3072", 00:23:24.320 "ffdhe4096", 00:23:24.320 "ffdhe6144", 00:23:24.320 "ffdhe8192" 00:23:24.320 ] 00:23:24.320 } 00:23:24.320 }, 00:23:24.320 { 00:23:24.320 "method": "nvmf_set_max_subsystems", 00:23:24.320 "params": { 00:23:24.320 "max_subsystems": 1024 00:23:24.320 } 00:23:24.320 }, 00:23:24.320 { 00:23:24.320 "method": "nvmf_set_crdt", 00:23:24.320 "params": { 00:23:24.320 "crdt1": 0, 00:23:24.320 "crdt2": 0, 00:23:24.320 "crdt3": 0 00:23:24.320 } 00:23:24.320 }, 00:23:24.320 { 00:23:24.320 "method": "nvmf_create_transport", 00:23:24.320 "params": { 00:23:24.320 "trtype": "TCP", 00:23:24.320 "max_queue_depth": 128, 00:23:24.320 "max_io_qpairs_per_ctrlr": 127, 00:23:24.320 "in_capsule_data_size": 4096, 00:23:24.320 "max_io_size": 131072, 00:23:24.320 "io_unit_size": 131072, 00:23:24.320 "max_aq_depth": 128, 00:23:24.320 "num_shared_buffers": 511, 00:23:24.320 "buf_cache_size": 4294967295, 00:23:24.320 "dif_insert_or_strip": false, 00:23:24.320 "zcopy": false, 00:23:24.320 "c2h_success": false, 00:23:24.320 "sock_priority": 0, 00:23:24.320 "abort_timeout_sec": 1, 00:23:24.320 "ack_timeout": 0, 00:23:24.320 "data_wr_pool_size": 0 00:23:24.320 } 00:23:24.320 }, 00:23:24.320 { 00:23:24.320 "method": "nvmf_create_subsystem", 00:23:24.320 "params": { 00:23:24.320 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.320 "allow_any_host": false, 00:23:24.320 "serial_number": "00000000000000000000", 00:23:24.320 "model_number": "SPDK bdev Controller", 00:23:24.320 "max_namespaces": 32, 00:23:24.320 "min_cntlid": 1, 00:23:24.320 "max_cntlid": 65519, 00:23:24.320 
"ana_reporting": false 00:23:24.320 } 00:23:24.320 }, 00:23:24.320 { 00:23:24.320 "method": "nvmf_subsystem_add_host", 00:23:24.320 "params": { 00:23:24.320 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.320 "host": "nqn.2016-06.io.spdk:host1", 00:23:24.320 "psk": "key0" 00:23:24.320 } 00:23:24.320 }, 00:23:24.320 { 00:23:24.320 "method": "nvmf_subsystem_add_ns", 00:23:24.320 "params": { 00:23:24.320 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.320 "namespace": { 00:23:24.320 "nsid": 1, 00:23:24.320 "bdev_name": "malloc0", 00:23:24.320 "nguid": "642AA83A995E4287A76BB2B4DFE03287", 00:23:24.320 "uuid": "642aa83a-995e-4287-a76b-b2b4dfe03287", 00:23:24.320 "no_auto_visible": false 00:23:24.320 } 00:23:24.320 } 00:23:24.320 }, 00:23:24.320 { 00:23:24.320 "method": "nvmf_subsystem_add_listener", 00:23:24.320 "params": { 00:23:24.320 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.320 "listen_address": { 00:23:24.320 "trtype": "TCP", 00:23:24.320 "adrfam": "IPv4", 00:23:24.320 "traddr": "10.0.0.2", 00:23:24.320 "trsvcid": "4420" 00:23:24.320 }, 00:23:24.320 "secure_channel": false, 00:23:24.320 "sock_impl": "ssl" 00:23:24.320 } 00:23:24.320 } 00:23:24.320 ] 00:23:24.320 } 00:23:24.320 ] 00:23:24.320 }' 00:23:24.320 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:24.579 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:24.579 "subsystems": [ 00:23:24.579 { 00:23:24.579 "subsystem": "keyring", 00:23:24.579 "config": [ 00:23:24.579 { 00:23:24.579 "method": "keyring_file_add_key", 00:23:24.579 "params": { 00:23:24.579 "name": "key0", 00:23:24.579 "path": "/tmp/tmp.ajB8RVuSSK" 00:23:24.579 } 00:23:24.579 } 00:23:24.579 ] 00:23:24.579 }, 00:23:24.579 { 00:23:24.579 "subsystem": "iobuf", 00:23:24.579 "config": [ 00:23:24.579 { 00:23:24.579 "method": "iobuf_set_options", 00:23:24.579 "params": { 00:23:24.579 
"small_pool_count": 8192, 00:23:24.579 "large_pool_count": 1024, 00:23:24.579 "small_bufsize": 8192, 00:23:24.579 "large_bufsize": 135168, 00:23:24.579 "enable_numa": false 00:23:24.579 } 00:23:24.579 } 00:23:24.579 ] 00:23:24.579 }, 00:23:24.579 { 00:23:24.579 "subsystem": "sock", 00:23:24.579 "config": [ 00:23:24.579 { 00:23:24.579 "method": "sock_set_default_impl", 00:23:24.579 "params": { 00:23:24.579 "impl_name": "posix" 00:23:24.579 } 00:23:24.579 }, 00:23:24.579 { 00:23:24.579 "method": "sock_impl_set_options", 00:23:24.579 "params": { 00:23:24.579 "impl_name": "ssl", 00:23:24.579 "recv_buf_size": 4096, 00:23:24.579 "send_buf_size": 4096, 00:23:24.579 "enable_recv_pipe": true, 00:23:24.579 "enable_quickack": false, 00:23:24.579 "enable_placement_id": 0, 00:23:24.579 "enable_zerocopy_send_server": true, 00:23:24.579 "enable_zerocopy_send_client": false, 00:23:24.579 "zerocopy_threshold": 0, 00:23:24.579 "tls_version": 0, 00:23:24.579 "enable_ktls": false 00:23:24.579 } 00:23:24.579 }, 00:23:24.579 { 00:23:24.579 "method": "sock_impl_set_options", 00:23:24.579 "params": { 00:23:24.579 "impl_name": "posix", 00:23:24.579 "recv_buf_size": 2097152, 00:23:24.579 "send_buf_size": 2097152, 00:23:24.579 "enable_recv_pipe": true, 00:23:24.579 "enable_quickack": false, 00:23:24.579 "enable_placement_id": 0, 00:23:24.579 "enable_zerocopy_send_server": true, 00:23:24.579 "enable_zerocopy_send_client": false, 00:23:24.579 "zerocopy_threshold": 0, 00:23:24.579 "tls_version": 0, 00:23:24.579 "enable_ktls": false 00:23:24.579 } 00:23:24.579 } 00:23:24.579 ] 00:23:24.579 }, 00:23:24.579 { 00:23:24.579 "subsystem": "vmd", 00:23:24.579 "config": [] 00:23:24.579 }, 00:23:24.579 { 00:23:24.579 "subsystem": "accel", 00:23:24.579 "config": [ 00:23:24.579 { 00:23:24.579 "method": "accel_set_options", 00:23:24.579 "params": { 00:23:24.579 "small_cache_size": 128, 00:23:24.579 "large_cache_size": 16, 00:23:24.579 "task_count": 2048, 00:23:24.579 "sequence_count": 2048, 00:23:24.579 
"buf_count": 2048 00:23:24.579 } 00:23:24.579 } 00:23:24.579 ] 00:23:24.579 }, 00:23:24.579 { 00:23:24.579 "subsystem": "bdev", 00:23:24.579 "config": [ 00:23:24.579 { 00:23:24.579 "method": "bdev_set_options", 00:23:24.579 "params": { 00:23:24.579 "bdev_io_pool_size": 65535, 00:23:24.579 "bdev_io_cache_size": 256, 00:23:24.579 "bdev_auto_examine": true, 00:23:24.579 "iobuf_small_cache_size": 128, 00:23:24.579 "iobuf_large_cache_size": 16 00:23:24.579 } 00:23:24.579 }, 00:23:24.579 { 00:23:24.579 "method": "bdev_raid_set_options", 00:23:24.579 "params": { 00:23:24.579 "process_window_size_kb": 1024, 00:23:24.579 "process_max_bandwidth_mb_sec": 0 00:23:24.579 } 00:23:24.579 }, 00:23:24.579 { 00:23:24.579 "method": "bdev_iscsi_set_options", 00:23:24.579 "params": { 00:23:24.579 "timeout_sec": 30 00:23:24.579 } 00:23:24.579 }, 00:23:24.579 { 00:23:24.579 "method": "bdev_nvme_set_options", 00:23:24.579 "params": { 00:23:24.579 "action_on_timeout": "none", 00:23:24.579 "timeout_us": 0, 00:23:24.579 "timeout_admin_us": 0, 00:23:24.579 "keep_alive_timeout_ms": 10000, 00:23:24.579 "arbitration_burst": 0, 00:23:24.579 "low_priority_weight": 0, 00:23:24.579 "medium_priority_weight": 0, 00:23:24.579 "high_priority_weight": 0, 00:23:24.579 "nvme_adminq_poll_period_us": 10000, 00:23:24.579 "nvme_ioq_poll_period_us": 0, 00:23:24.579 "io_queue_requests": 512, 00:23:24.579 "delay_cmd_submit": true, 00:23:24.579 "transport_retry_count": 4, 00:23:24.579 "bdev_retry_count": 3, 00:23:24.579 "transport_ack_timeout": 0, 00:23:24.579 "ctrlr_loss_timeout_sec": 0, 00:23:24.579 "reconnect_delay_sec": 0, 00:23:24.580 "fast_io_fail_timeout_sec": 0, 00:23:24.580 "disable_auto_failback": false, 00:23:24.580 "generate_uuids": false, 00:23:24.580 "transport_tos": 0, 00:23:24.580 "nvme_error_stat": false, 00:23:24.580 "rdma_srq_size": 0, 00:23:24.580 "io_path_stat": false, 00:23:24.580 "allow_accel_sequence": false, 00:23:24.580 "rdma_max_cq_size": 0, 00:23:24.580 "rdma_cm_event_timeout_ms": 0, 
00:23:24.580 "dhchap_digests": [ 00:23:24.580 "sha256", 00:23:24.580 "sha384", 00:23:24.580 "sha512" 00:23:24.580 ], 00:23:24.580 "dhchap_dhgroups": [ 00:23:24.580 "null", 00:23:24.580 "ffdhe2048", 00:23:24.580 "ffdhe3072", 00:23:24.580 "ffdhe4096", 00:23:24.580 "ffdhe6144", 00:23:24.580 "ffdhe8192" 00:23:24.580 ] 00:23:24.580 } 00:23:24.580 }, 00:23:24.580 { 00:23:24.580 "method": "bdev_nvme_attach_controller", 00:23:24.580 "params": { 00:23:24.580 "name": "nvme0", 00:23:24.580 "trtype": "TCP", 00:23:24.580 "adrfam": "IPv4", 00:23:24.580 "traddr": "10.0.0.2", 00:23:24.580 "trsvcid": "4420", 00:23:24.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.580 "prchk_reftag": false, 00:23:24.580 "prchk_guard": false, 00:23:24.580 "ctrlr_loss_timeout_sec": 0, 00:23:24.580 "reconnect_delay_sec": 0, 00:23:24.580 "fast_io_fail_timeout_sec": 0, 00:23:24.580 "psk": "key0", 00:23:24.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:24.580 "hdgst": false, 00:23:24.580 "ddgst": false, 00:23:24.580 "multipath": "multipath" 00:23:24.580 } 00:23:24.580 }, 00:23:24.580 { 00:23:24.580 "method": "bdev_nvme_set_hotplug", 00:23:24.580 "params": { 00:23:24.580 "period_us": 100000, 00:23:24.580 "enable": false 00:23:24.580 } 00:23:24.580 }, 00:23:24.580 { 00:23:24.580 "method": "bdev_enable_histogram", 00:23:24.580 "params": { 00:23:24.580 "name": "nvme0n1", 00:23:24.580 "enable": true 00:23:24.580 } 00:23:24.580 }, 00:23:24.580 { 00:23:24.580 "method": "bdev_wait_for_examine" 00:23:24.580 } 00:23:24.580 ] 00:23:24.580 }, 00:23:24.580 { 00:23:24.580 "subsystem": "nbd", 00:23:24.580 "config": [] 00:23:24.580 } 00:23:24.580 ] 00:23:24.580 }' 00:23:24.580 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 691438 00:23:24.580 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 691438 ']' 00:23:24.580 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 691438 00:23:24.580 13:55:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:24.580 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:24.580 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 691438 00:23:24.580 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:24.580 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:24.580 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 691438' 00:23:24.580 killing process with pid 691438 00:23:24.580 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 691438 00:23:24.580 Received shutdown signal, test time was about 1.000000 seconds 00:23:24.580 00:23:24.580 Latency(us) 00:23:24.580 [2024-12-05T12:55:07.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.580 [2024-12-05T12:55:07.167Z] =================================================================================================================== 00:23:24.580 [2024-12-05T12:55:07.167Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:24.580 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 691438 00:23:24.839 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 691334 00:23:24.839 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 691334 ']' 00:23:24.839 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 691334 00:23:24.839 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:24.839 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:24.839 13:55:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 691334 00:23:24.839 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:24.839 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:24.839 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 691334' 00:23:24.839 killing process with pid 691334 00:23:24.839 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 691334 00:23:24.839 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 691334 00:23:24.839 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:24.839 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:24.839 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:24.839 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:24.839 "subsystems": [ 00:23:24.839 { 00:23:24.839 "subsystem": "keyring", 00:23:24.839 "config": [ 00:23:24.839 { 00:23:24.839 "method": "keyring_file_add_key", 00:23:24.839 "params": { 00:23:24.839 "name": "key0", 00:23:24.839 "path": "/tmp/tmp.ajB8RVuSSK" 00:23:24.839 } 00:23:24.839 } 00:23:24.839 ] 00:23:24.839 }, 00:23:24.839 { 00:23:24.839 "subsystem": "iobuf", 00:23:24.839 "config": [ 00:23:24.839 { 00:23:24.839 "method": "iobuf_set_options", 00:23:24.839 "params": { 00:23:24.839 "small_pool_count": 8192, 00:23:24.839 "large_pool_count": 1024, 00:23:24.839 "small_bufsize": 8192, 00:23:24.839 "large_bufsize": 135168, 00:23:24.839 "enable_numa": false 00:23:24.839 } 00:23:24.839 } 00:23:24.839 ] 00:23:24.839 }, 00:23:24.839 { 00:23:24.839 "subsystem": "sock", 00:23:24.839 "config": [ 00:23:24.839 { 
00:23:24.839 "method": "sock_set_default_impl", 00:23:24.839 "params": { 00:23:24.840 "impl_name": "posix" 00:23:24.840 } 00:23:24.840 }, 00:23:24.840 { 00:23:24.840 "method": "sock_impl_set_options", 00:23:24.840 "params": { 00:23:24.840 "impl_name": "ssl", 00:23:24.840 "recv_buf_size": 4096, 00:23:24.840 "send_buf_size": 4096, 00:23:24.840 "enable_recv_pipe": true, 00:23:24.840 "enable_quickack": false, 00:23:24.840 "enable_placement_id": 0, 00:23:24.840 "enable_zerocopy_send_server": true, 00:23:24.840 "enable_zerocopy_send_client": false, 00:23:24.840 "zerocopy_threshold": 0, 00:23:24.840 "tls_version": 0, 00:23:24.840 "enable_ktls": false 00:23:24.840 } 00:23:24.840 }, 00:23:24.840 { 00:23:24.840 "method": "sock_impl_set_options", 00:23:24.840 "params": { 00:23:24.840 "impl_name": "posix", 00:23:24.840 "recv_buf_size": 2097152, 00:23:24.840 "send_buf_size": 2097152, 00:23:24.840 "enable_recv_pipe": true, 00:23:24.840 "enable_quickack": false, 00:23:24.840 "enable_placement_id": 0, 00:23:24.840 "enable_zerocopy_send_server": true, 00:23:24.840 "enable_zerocopy_send_client": false, 00:23:24.840 "zerocopy_threshold": 0, 00:23:24.840 "tls_version": 0, 00:23:24.840 "enable_ktls": false 00:23:24.840 } 00:23:24.840 } 00:23:24.840 ] 00:23:24.840 }, 00:23:24.840 { 00:23:24.840 "subsystem": "vmd", 00:23:24.840 "config": [] 00:23:24.840 }, 00:23:24.840 { 00:23:24.840 "subsystem": "accel", 00:23:24.840 "config": [ 00:23:24.840 { 00:23:24.840 "method": "accel_set_options", 00:23:24.840 "params": { 00:23:24.840 "small_cache_size": 128, 00:23:24.840 "large_cache_size": 16, 00:23:24.840 "task_count": 2048, 00:23:24.840 "sequence_count": 2048, 00:23:24.840 "buf_count": 2048 00:23:24.840 } 00:23:24.840 } 00:23:24.840 ] 00:23:24.840 }, 00:23:24.840 { 00:23:24.840 "subsystem": "bdev", 00:23:24.840 "config": [ 00:23:24.840 { 00:23:24.840 "method": "bdev_set_options", 00:23:24.840 "params": { 00:23:24.840 "bdev_io_pool_size": 65535, 00:23:24.840 "bdev_io_cache_size": 256, 
00:23:24.840 "bdev_auto_examine": true, 00:23:24.840 "iobuf_small_cache_size": 128, 00:23:24.840 "iobuf_large_cache_size": 16 00:23:24.840 } 00:23:24.840 }, 00:23:24.840 { 00:23:24.840 "method": "bdev_raid_set_options", 00:23:24.840 "params": { 00:23:24.840 "process_window_size_kb": 1024, 00:23:24.840 "process_max_bandwidth_mb_sec": 0 00:23:24.840 } 00:23:24.840 }, 00:23:24.840 { 00:23:24.840 "method": "bdev_iscsi_set_options", 00:23:24.840 "params": { 00:23:24.840 "timeout_sec": 30 00:23:24.840 } 00:23:24.840 }, 00:23:24.840 { 00:23:24.840 "method": "bdev_nvme_set_options", 00:23:24.840 "params": { 00:23:24.840 "action_on_timeout": "none", 00:23:24.840 "timeout_us": 0, 00:23:24.840 "timeout_admin_us": 0, 00:23:24.840 "keep_alive_timeout_ms": 10000, 00:23:24.840 "arbitration_burst": 0, 00:23:24.840 "low_priority_weight": 0, 00:23:24.840 "medium_priority_weight": 0, 00:23:24.840 "high_priority_weight": 0, 00:23:24.840 "nvme_adminq_poll_period_us": 10000, 00:23:24.840 "nvme_ioq_poll_period_us": 0, 00:23:24.840 "io_queue_requests": 0, 00:23:24.840 "delay_cmd_submit": true, 00:23:24.840 "transport_retry_count": 4, 00:23:24.840 "bdev_retry_count": 3, 00:23:24.840 "transport_ack_timeout": 0, 00:23:24.840 "ctrlr_loss_timeout_sec": 0, 00:23:24.840 "reconnect_delay_sec": 0, 00:23:24.840 "fast_io_fail_timeout_sec": 0, 00:23:24.840 "disable_auto_failback": false, 00:23:24.840 "generate_uuids": false, 00:23:24.840 "transport_tos": 0, 00:23:24.840 "nvme_error_stat": false, 00:23:24.840 "rdma_srq_size": 0, 00:23:24.840 "io_path_stat": false, 00:23:24.840 "allow_accel_sequence": false, 00:23:24.840 "rdma_max_cq_size": 0, 00:23:24.840 "rdma_cm_event_timeout_ms": 0, 00:23:24.840 "dhchap_digests": [ 00:23:24.840 "sha256", 00:23:24.840 "sha384", 00:23:24.840 "sha512" 00:23:24.840 ], 00:23:24.840 "dhchap_dhgroups": [ 00:23:24.840 "null", 00:23:24.840 "ffdhe2048", 00:23:24.840 "ffdhe3072", 00:23:24.840 "ffdhe4096", 00:23:24.840 "ffdhe6144", 00:23:24.840 "ffdhe8192" 00:23:24.840 ] 
00:23:24.840 } 00:23:24.840 }, 00:23:24.840 { 00:23:24.840 "method": "bdev_nvme_set_hotplug", 00:23:24.840 "params": { 00:23:24.840 "period_us": 100000, 00:23:24.840 "enable": false 00:23:24.840 } 00:23:24.840 }, 00:23:24.840 { 00:23:24.840 "method": "bdev_malloc_create", 00:23:24.840 "params": { 00:23:24.840 "name": "malloc0", 00:23:24.840 "num_blocks": 8192, 00:23:24.840 "block_size": 4096, 00:23:24.840 "physical_block_size": 4096, 00:23:24.840 "uuid": "642aa83a-995e-4287-a76b-b2b4dfe03287", 00:23:24.840 "optimal_io_boundary": 0, 00:23:24.840 "md_size": 0, 00:23:24.840 "dif_type": 0, 00:23:24.840 "dif_is_head_of_md": false, 00:23:24.840 "dif_pi_format": 0 00:23:24.840 } 00:23:24.840 }, 00:23:24.840 { 00:23:24.840 "method": "bdev_wait_for_examine" 00:23:24.840 } 00:23:24.840 ] 00:23:24.840 }, 00:23:24.840 { 00:23:24.840 "subsystem": "nbd", 00:23:24.840 "config": [] 00:23:24.840 }, 00:23:24.840 { 00:23:24.840 "subsystem": "scheduler", 00:23:24.840 "config": [ 00:23:24.840 { 00:23:24.840 "method": "framework_set_scheduler", 00:23:24.840 "params": { 00:23:24.840 "name": "static" 00:23:24.840 } 00:23:24.840 } 00:23:24.840 ] 00:23:24.840 }, 00:23:24.840 { 00:23:24.840 "subsystem": "nvmf", 00:23:24.840 "config": [ 00:23:24.840 { 00:23:24.840 "method": "nvmf_set_config", 00:23:24.840 "params": { 00:23:24.840 "discovery_filter": "match_any", 00:23:24.840 "admin_cmd_passthru": { 00:23:24.840 "identify_ctrlr": false 00:23:24.840 }, 00:23:24.840 "dhchap_digests": [ 00:23:24.840 "sha256", 00:23:24.840 "sha384", 00:23:24.840 "sha512" 00:23:24.840 ], 00:23:24.840 "dhchap_dhgroups": [ 00:23:24.840 "null", 00:23:24.840 "ffdhe2048", 00:23:24.840 "ffdhe3072", 00:23:24.840 "ffdhe4096", 00:23:24.840 "ffdhe6144", 00:23:24.840 "ffdhe8192" 00:23:24.840 ] 00:23:24.840 } 00:23:24.840 }, 00:23:24.840 { 00:23:24.840 "method": "nvmf_set_max_subsystems", 00:23:24.840 "params": { 00:23:24.840 "max_subsystems": 1024 00:23:24.840 } 00:23:24.840 }, 00:23:24.840 { 00:23:24.840 "method": 
"nvmf_set_crdt", 00:23:24.840 "params": { 00:23:24.840 "crdt1": 0, 00:23:24.840 "crdt2": 0, 00:23:24.840 "crdt3": 0 00:23:24.840 } 00:23:24.840 }, 00:23:24.840 { 00:23:24.840 "method": "nvmf_create_transport", 00:23:24.840 "params": { 00:23:24.840 "trtype": "TCP", 00:23:24.840 "max_queue_depth": 128, 00:23:24.840 "max_io_qpairs_per_ctrlr": 127, 00:23:24.840 "in_capsule_data_size": 4096, 00:23:24.840 "max_io_size": 131072, 00:23:24.840 "io_unit_size": 131072, 00:23:24.840 "max_aq_depth": 128, 00:23:24.840 "num_shared_buffers": 511, 00:23:24.840 "buf_cache_size": 4294967295, 00:23:24.840 "dif_insert_or_strip": false, 00:23:24.840 "zcopy": false, 00:23:24.840 "c2h_success": false, 00:23:24.840 "sock_priority": 0, 00:23:24.840 "abort_timeout_sec": 1, 00:23:24.840 "ack_timeout": 0, 00:23:24.840 "data_wr_pool_size": 0 00:23:24.840 } 00:23:24.840 }, 00:23:24.840 { 00:23:24.840 "method": "nvmf_create_subsystem", 00:23:24.840 "params": { 00:23:24.840 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.840 "allow_any_host": false, 00:23:24.840 "serial_number": "00000000000000000000", 00:23:24.840 "model_number": "SPDK bdev Controller", 00:23:24.840 "max_namespaces": 32, 00:23:24.840 "min_cntlid": 1, 00:23:24.840 "max_cntlid": 65519, 00:23:24.840 "ana_reporting": false 00:23:24.840 } 00:23:24.840 }, 00:23:24.840 { 00:23:24.840 "method": "nvmf_subsystem_add_host", 00:23:24.840 "params": { 00:23:24.840 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.840 "host": "nqn.2016-06.io.spdk:host1", 00:23:24.840 "psk": "key0" 00:23:24.840 } 00:23:24.840 }, 00:23:24.840 { 00:23:24.840 "method": "nvmf_subsystem_add_ns", 00:23:24.840 "params": { 00:23:24.840 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.840 "namespace": { 00:23:24.840 "nsid": 1, 00:23:24.840 "bdev_name": "malloc0", 00:23:24.840 "nguid": "642AA83A995E4287A76BB2B4DFE03287", 00:23:24.840 "uuid": "642aa83a-995e-4287-a76b-b2b4dfe03287", 00:23:24.840 "no_auto_visible": false 00:23:24.840 } 00:23:24.840 } 00:23:24.840 }, 00:23:24.840 { 
00:23:24.840 "method": "nvmf_subsystem_add_listener", 00:23:24.840 "params": { 00:23:24.840 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.840 "listen_address": { 00:23:24.840 "trtype": "TCP", 00:23:24.840 "adrfam": "IPv4", 00:23:24.840 "traddr": "10.0.0.2", 00:23:24.840 "trsvcid": "4420" 00:23:24.840 }, 00:23:24.840 "secure_channel": false, 00:23:24.840 "sock_impl": "ssl" 00:23:24.840 } 00:23:24.840 } 00:23:24.840 ] 00:23:24.840 } 00:23:24.840 ] 00:23:24.840 }' 00:23:24.841 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.841 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=691832 00:23:24.841 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:24.841 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 691832 00:23:24.841 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 691832 ']' 00:23:25.099 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.099 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:25.099 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.099 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.099 13:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.099 [2024-12-05 13:55:07.469239] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:23:25.100 [2024-12-05 13:55:07.469288] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.100 [2024-12-05 13:55:07.532596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.100 [2024-12-05 13:55:07.571776] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.100 [2024-12-05 13:55:07.571819] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.100 [2024-12-05 13:55:07.571827] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.100 [2024-12-05 13:55:07.571832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.100 [2024-12-05 13:55:07.571856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:25.100 [2024-12-05 13:55:07.572458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.358 [2024-12-05 13:55:07.784589] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.358 [2024-12-05 13:55:07.816620] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:25.358 [2024-12-05 13:55:07.816860] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.925 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:25.925 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:25.925 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:25.925 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:25.925 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.925 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.925 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=692074 00:23:25.925 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 692074 /var/tmp/bdevperf.sock 00:23:25.925 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 692074 ']' 00:23:25.925 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:25.925 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:25.925 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:23:25.925 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:25.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:25.925 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:23:25.925 "subsystems": [ 00:23:25.925 { 00:23:25.925 "subsystem": "keyring", 00:23:25.925 "config": [ 00:23:25.925 { 00:23:25.925 "method": "keyring_file_add_key", 00:23:25.925 "params": { 00:23:25.925 "name": "key0", 00:23:25.925 "path": "/tmp/tmp.ajB8RVuSSK" 00:23:25.925 } 00:23:25.925 } 00:23:25.925 ] 00:23:25.925 }, 00:23:25.925 { 00:23:25.925 "subsystem": "iobuf", 00:23:25.925 "config": [ 00:23:25.925 { 00:23:25.925 "method": "iobuf_set_options", 00:23:25.925 "params": { 00:23:25.925 "small_pool_count": 8192, 00:23:25.925 "large_pool_count": 1024, 00:23:25.925 "small_bufsize": 8192, 00:23:25.925 "large_bufsize": 135168, 00:23:25.925 "enable_numa": false 00:23:25.925 } 00:23:25.925 } 00:23:25.925 ] 00:23:25.925 }, 00:23:25.925 { 00:23:25.925 "subsystem": "sock", 00:23:25.925 "config": [ 00:23:25.925 { 00:23:25.925 "method": "sock_set_default_impl", 00:23:25.925 "params": { 00:23:25.925 "impl_name": "posix" 00:23:25.925 } 00:23:25.925 }, 00:23:25.925 { 00:23:25.925 "method": "sock_impl_set_options", 00:23:25.925 "params": { 00:23:25.925 "impl_name": "ssl", 00:23:25.925 "recv_buf_size": 4096, 00:23:25.925 "send_buf_size": 4096, 00:23:25.925 "enable_recv_pipe": true, 00:23:25.925 "enable_quickack": false, 00:23:25.925 "enable_placement_id": 0, 00:23:25.925 "enable_zerocopy_send_server": true, 00:23:25.925 "enable_zerocopy_send_client": false, 00:23:25.925 "zerocopy_threshold": 0, 00:23:25.925 "tls_version": 0, 00:23:25.925 "enable_ktls": false 00:23:25.925 } 00:23:25.925 }, 00:23:25.925 { 00:23:25.925 "method": "sock_impl_set_options", 00:23:25.925 "params": { 
00:23:25.925 "impl_name": "posix", 00:23:25.925 "recv_buf_size": 2097152, 00:23:25.925 "send_buf_size": 2097152, 00:23:25.925 "enable_recv_pipe": true, 00:23:25.925 "enable_quickack": false, 00:23:25.925 "enable_placement_id": 0, 00:23:25.925 "enable_zerocopy_send_server": true, 00:23:25.926 "enable_zerocopy_send_client": false, 00:23:25.926 "zerocopy_threshold": 0, 00:23:25.926 "tls_version": 0, 00:23:25.926 "enable_ktls": false 00:23:25.926 } 00:23:25.926 } 00:23:25.926 ] 00:23:25.926 }, 00:23:25.926 { 00:23:25.926 "subsystem": "vmd", 00:23:25.926 "config": [] 00:23:25.926 }, 00:23:25.926 { 00:23:25.926 "subsystem": "accel", 00:23:25.926 "config": [ 00:23:25.926 { 00:23:25.926 "method": "accel_set_options", 00:23:25.926 "params": { 00:23:25.926 "small_cache_size": 128, 00:23:25.926 "large_cache_size": 16, 00:23:25.926 "task_count": 2048, 00:23:25.926 "sequence_count": 2048, 00:23:25.926 "buf_count": 2048 00:23:25.926 } 00:23:25.926 } 00:23:25.926 ] 00:23:25.926 }, 00:23:25.926 { 00:23:25.926 "subsystem": "bdev", 00:23:25.926 "config": [ 00:23:25.926 { 00:23:25.926 "method": "bdev_set_options", 00:23:25.926 "params": { 00:23:25.926 "bdev_io_pool_size": 65535, 00:23:25.926 "bdev_io_cache_size": 256, 00:23:25.926 "bdev_auto_examine": true, 00:23:25.926 "iobuf_small_cache_size": 128, 00:23:25.926 "iobuf_large_cache_size": 16 00:23:25.926 } 00:23:25.926 }, 00:23:25.926 { 00:23:25.926 "method": "bdev_raid_set_options", 00:23:25.926 "params": { 00:23:25.926 "process_window_size_kb": 1024, 00:23:25.926 "process_max_bandwidth_mb_sec": 0 00:23:25.926 } 00:23:25.926 }, 00:23:25.926 { 00:23:25.926 "method": "bdev_iscsi_set_options", 00:23:25.926 "params": { 00:23:25.926 "timeout_sec": 30 00:23:25.926 } 00:23:25.926 }, 00:23:25.926 { 00:23:25.926 "method": "bdev_nvme_set_options", 00:23:25.926 "params": { 00:23:25.926 "action_on_timeout": "none", 00:23:25.926 "timeout_us": 0, 00:23:25.926 "timeout_admin_us": 0, 00:23:25.926 "keep_alive_timeout_ms": 10000, 00:23:25.926 
"arbitration_burst": 0, 00:23:25.926 "low_priority_weight": 0, 00:23:25.926 "medium_priority_weight": 0, 00:23:25.926 "high_priority_weight": 0, 00:23:25.926 "nvme_adminq_poll_period_us": 10000, 00:23:25.926 "nvme_ioq_poll_period_us": 0, 00:23:25.926 "io_queue_requests": 512, 00:23:25.926 "delay_cmd_submit": true, 00:23:25.926 "transport_retry_count": 4, 00:23:25.926 "bdev_retry_count": 3, 00:23:25.926 "transport_ack_timeout": 0, 00:23:25.926 "ctrlr_loss_timeout_sec": 0, 00:23:25.926 "reconnect_delay_sec": 0, 00:23:25.926 "fast_io_fail_timeout_sec": 0, 00:23:25.926 "disable_auto_failback": false, 00:23:25.926 "generate_uuids": false, 00:23:25.926 "transport_tos": 0, 00:23:25.926 "nvme_error_stat": false, 00:23:25.926 "rdma_srq_size": 0, 00:23:25.926 "io_path_stat": false, 00:23:25.926 "allow_accel_sequence": false, 00:23:25.926 "rdma_max_cq_size": 0, 00:23:25.926 "rdma_cm_event_timeout_ms": 0, 00:23:25.926 "dhchap_digests": [ 00:23:25.926 "sha256", 00:23:25.926 "sha384", 00:23:25.926 "sha512" 00:23:25.926 ], 00:23:25.926 "dhchap_dhgroups": [ 00:23:25.926 "null", 00:23:25.926 "ffdhe2048", 00:23:25.926 "ffdhe3072", 00:23:25.926 "ffdhe4096", 00:23:25.926 "ffdhe6144", 00:23:25.926 "ffdhe8192" 00:23:25.926 ] 00:23:25.926 } 00:23:25.926 }, 00:23:25.926 { 00:23:25.926 "method": "bdev_nvme_attach_controller", 00:23:25.926 "params": { 00:23:25.926 "name": "nvme0", 00:23:25.926 "trtype": "TCP", 00:23:25.926 "adrfam": "IPv4", 00:23:25.926 "traddr": "10.0.0.2", 00:23:25.926 "trsvcid": "4420", 00:23:25.926 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.926 "prchk_reftag": false, 00:23:25.926 "prchk_guard": false, 00:23:25.926 "ctrlr_loss_timeout_sec": 0, 00:23:25.926 "reconnect_delay_sec": 0, 00:23:25.926 "fast_io_fail_timeout_sec": 0, 00:23:25.926 "psk": "key0", 00:23:25.926 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:25.926 "hdgst": false, 00:23:25.926 "ddgst": false, 00:23:25.926 "multipath": "multipath" 00:23:25.926 } 00:23:25.926 }, 00:23:25.926 { 00:23:25.926 
"method": "bdev_nvme_set_hotplug", 00:23:25.926 "params": { 00:23:25.926 "period_us": 100000, 00:23:25.926 "enable": false 00:23:25.926 } 00:23:25.926 }, 00:23:25.926 { 00:23:25.926 "method": "bdev_enable_histogram", 00:23:25.926 "params": { 00:23:25.926 "name": "nvme0n1", 00:23:25.926 "enable": true 00:23:25.926 } 00:23:25.926 }, 00:23:25.926 { 00:23:25.926 "method": "bdev_wait_for_examine" 00:23:25.926 } 00:23:25.926 ] 00:23:25.926 }, 00:23:25.926 { 00:23:25.926 "subsystem": "nbd", 00:23:25.926 "config": [] 00:23:25.926 } 00:23:25.926 ] 00:23:25.926 }' 00:23:25.926 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.926 13:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.926 [2024-12-05 13:55:08.416792] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:23:25.926 [2024-12-05 13:55:08.416836] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid692074 ] 00:23:25.926 [2024-12-05 13:55:08.488286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.185 [2024-12-05 13:55:08.530797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.185 [2024-12-05 13:55:08.684734] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:26.752 13:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:26.752 13:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:26.752 13:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:26.752 13:55:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:23:27.010 13:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.010 13:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:27.010 Running I/O for 1 seconds... 00:23:28.386 5372.00 IOPS, 20.98 MiB/s 00:23:28.386 Latency(us) 00:23:28.386 [2024-12-05T12:55:10.973Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.386 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:28.386 Verification LBA range: start 0x0 length 0x2000 00:23:28.386 nvme0n1 : 1.01 5424.62 21.19 0.00 0.00 23433.68 5804.62 33953.89 00:23:28.386 [2024-12-05T12:55:10.973Z] =================================================================================================================== 00:23:28.386 [2024-12-05T12:55:10.973Z] Total : 5424.62 21.19 0.00 0.00 23433.68 5804.62 33953.89 00:23:28.386 { 00:23:28.386 "results": [ 00:23:28.386 { 00:23:28.386 "job": "nvme0n1", 00:23:28.386 "core_mask": "0x2", 00:23:28.386 "workload": "verify", 00:23:28.386 "status": "finished", 00:23:28.386 "verify_range": { 00:23:28.386 "start": 0, 00:23:28.386 "length": 8192 00:23:28.386 }, 00:23:28.386 "queue_depth": 128, 00:23:28.386 "io_size": 4096, 00:23:28.386 "runtime": 1.013895, 00:23:28.386 "iops": 5424.624837877689, 00:23:28.386 "mibps": 21.189940772959723, 00:23:28.386 "io_failed": 0, 00:23:28.386 "io_timeout": 0, 00:23:28.386 "avg_latency_us": 23433.68198649351, 00:23:28.386 "min_latency_us": 5804.617142857142, 00:23:28.386 "max_latency_us": 33953.88952380952 00:23:28.386 } 00:23:28.386 ], 00:23:28.386 "core_count": 1 00:23:28.386 } 00:23:28.386 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:23:28.386 13:55:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:23:28.386 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:28.386 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:23:28.386 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:23:28.386 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:23:28.386 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:28.386 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:28.386 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:28.386 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:28.386 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:28.386 nvmf_trace.0 00:23:28.386 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:23:28.386 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 692074 00:23:28.386 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 692074 ']' 00:23:28.386 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 692074 00:23:28.386 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:28.386 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:28.386 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 692074 00:23:28.386 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:28.386 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:28.386 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 692074' 00:23:28.386 killing process with pid 692074 00:23:28.386 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 692074 00:23:28.387 Received shutdown signal, test time was about 1.000000 seconds 00:23:28.387 00:23:28.387 Latency(us) 00:23:28.387 [2024-12-05T12:55:10.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.387 [2024-12-05T12:55:10.974Z] =================================================================================================================== 00:23:28.387 [2024-12-05T12:55:10.974Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:28.387 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 692074 00:23:28.387 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:28.387 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:28.387 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:23:28.387 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:28.387 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:23:28.387 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:28.387 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:28.387 rmmod nvme_tcp 00:23:28.387 rmmod nvme_fabrics 00:23:28.387 rmmod nvme_keyring 00:23:28.387 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:23:28.387 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:23:28.387 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:23:28.387 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 691832 ']' 00:23:28.387 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 691832 00:23:28.387 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 691832 ']' 00:23:28.387 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 691832 00:23:28.387 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:28.387 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:28.387 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 691832 00:23:28.645 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:28.646 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:28.646 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 691832' 00:23:28.646 killing process with pid 691832 00:23:28.646 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 691832 00:23:28.646 13:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 691832 00:23:28.646 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:28.646 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:28.646 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:28.646 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:23:28.646 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:23:28.646 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:28.646 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:23:28.646 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:28.646 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:28.646 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.646 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.646 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.0nYKlOGXPB /tmp/tmp.f1y5NBTPak /tmp/tmp.ajB8RVuSSK 00:23:31.179 00:23:31.179 real 1m19.988s 00:23:31.179 user 2m2.159s 00:23:31.179 sys 0m30.377s 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.179 ************************************ 00:23:31.179 END TEST nvmf_tls 00:23:31.179 ************************************ 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:31.179 ************************************ 00:23:31.179 START TEST nvmf_fips 00:23:31.179 ************************************ 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:31.179 * Looking for test storage... 00:23:31.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:23:31.179 
13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:23:31.179 13:55:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:31.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.179 --rc genhtml_branch_coverage=1 00:23:31.179 --rc genhtml_function_coverage=1 00:23:31.179 --rc genhtml_legend=1 00:23:31.179 --rc geninfo_all_blocks=1 00:23:31.179 --rc geninfo_unexecuted_blocks=1 00:23:31.179 00:23:31.179 ' 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:31.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.179 --rc genhtml_branch_coverage=1 00:23:31.179 --rc genhtml_function_coverage=1 00:23:31.179 --rc genhtml_legend=1 00:23:31.179 --rc geninfo_all_blocks=1 00:23:31.179 --rc geninfo_unexecuted_blocks=1 00:23:31.179 00:23:31.179 ' 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:31.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.179 --rc genhtml_branch_coverage=1 00:23:31.179 --rc genhtml_function_coverage=1 00:23:31.179 --rc genhtml_legend=1 00:23:31.179 --rc geninfo_all_blocks=1 00:23:31.179 --rc geninfo_unexecuted_blocks=1 00:23:31.179 00:23:31.179 ' 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:31.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.179 --rc genhtml_branch_coverage=1 00:23:31.179 --rc genhtml_function_coverage=1 00:23:31.179 --rc genhtml_legend=1 00:23:31.179 --rc geninfo_all_blocks=1 00:23:31.179 --rc geninfo_unexecuted_blocks=1 00:23:31.179 00:23:31.179 ' 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:31.179 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:31.180 13:55:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.180 13:55:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:31.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:23:31.180 Error setting digest 00:23:31.180 40824F60057F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:23:31.180 40824F60057F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:31.180 13:55:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:23:31.180 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:37.744 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:37.745 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:37.745 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:37.745 Found net devices under 0000:86:00.0: cvl_0_0 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:37.745 Found net devices under 0000:86:00.1: cvl_0_1 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:37.745 13:55:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:37.745 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:37.745 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms 00:23:37.745 00:23:37.745 --- 10.0.0.2 ping statistics --- 00:23:37.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.745 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:37.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:37.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:23:37.745 00:23:37.745 --- 10.0.0.1 ping statistics --- 00:23:37.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.745 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:37.745 13:55:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=696093 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 696093 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 696093 ']' 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:37.745 [2024-12-05 13:55:19.737645] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:23:37.745 [2024-12-05 13:55:19.737696] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.745 [2024-12-05 13:55:19.799622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.745 [2024-12-05 13:55:19.838079] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.745 [2024-12-05 13:55:19.838114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:37.745 [2024-12-05 13:55:19.838121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.745 [2024-12-05 13:55:19.838127] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.745 [2024-12-05 13:55:19.838134] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:37.745 [2024-12-05 13:55:19.838671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:37.745 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:37.746 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:37.746 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:37.746 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.746 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:23:37.746 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:37.746 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:23:37.746 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.w3O 00:23:37.746 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:37.746 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.w3O 00:23:37.746 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.w3O 00:23:37.746 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.w3O 00:23:37.746 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:37.746 [2024-12-05 13:55:20.165943] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.746 [2024-12-05 13:55:20.181946] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:37.746 [2024-12-05 13:55:20.182151] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:37.746 malloc0 00:23:37.746 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:37.746 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=696120 00:23:37.746 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:37.746 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 696120 /var/tmp/bdevperf.sock 00:23:37.746 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 696120 ']' 00:23:37.746 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:37.746 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:37.746 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:37.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:37.746 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:37.746 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:37.746 [2024-12-05 13:55:20.312587] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:23:37.746 [2024-12-05 13:55:20.312641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid696120 ] 00:23:38.005 [2024-12-05 13:55:20.386470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.005 [2024-12-05 13:55:20.427199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.005 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:38.005 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:38.005 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.w3O 00:23:38.263 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:38.522 [2024-12-05 13:55:20.902771] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:38.522 TLSTESTn1 00:23:38.522 13:55:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:38.522 Running I/O for 10 seconds... 
00:23:40.831 5507.00 IOPS, 21.51 MiB/s [2024-12-05T12:55:24.351Z] 5591.00 IOPS, 21.84 MiB/s [2024-12-05T12:55:25.307Z] 5615.67 IOPS, 21.94 MiB/s [2024-12-05T12:55:26.241Z] 5616.75 IOPS, 21.94 MiB/s [2024-12-05T12:55:27.175Z] 5598.00 IOPS, 21.87 MiB/s [2024-12-05T12:55:28.111Z] 5585.17 IOPS, 21.82 MiB/s [2024-12-05T12:55:29.490Z] 5502.00 IOPS, 21.49 MiB/s [2024-12-05T12:55:30.423Z] 5413.75 IOPS, 21.15 MiB/s [2024-12-05T12:55:31.354Z] 5347.67 IOPS, 20.89 MiB/s [2024-12-05T12:55:31.354Z] 5307.30 IOPS, 20.73 MiB/s 00:23:48.767 Latency(us) 00:23:48.767 [2024-12-05T12:55:31.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.767 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:48.767 Verification LBA range: start 0x0 length 0x2000 00:23:48.767 TLSTESTn1 : 10.02 5309.54 20.74 0.00 0.00 24070.34 5586.16 31706.94 00:23:48.767 [2024-12-05T12:55:31.354Z] =================================================================================================================== 00:23:48.767 [2024-12-05T12:55:31.354Z] Total : 5309.54 20.74 0.00 0.00 24070.34 5586.16 31706.94 00:23:48.767 { 00:23:48.767 "results": [ 00:23:48.767 { 00:23:48.767 "job": "TLSTESTn1", 00:23:48.767 "core_mask": "0x4", 00:23:48.767 "workload": "verify", 00:23:48.767 "status": "finished", 00:23:48.767 "verify_range": { 00:23:48.767 "start": 0, 00:23:48.767 "length": 8192 00:23:48.767 }, 00:23:48.767 "queue_depth": 128, 00:23:48.767 "io_size": 4096, 00:23:48.767 "runtime": 10.019316, 00:23:48.767 "iops": 5309.544084646098, 00:23:48.767 "mibps": 20.74040658064882, 00:23:48.767 "io_failed": 0, 00:23:48.767 "io_timeout": 0, 00:23:48.767 "avg_latency_us": 24070.344235640798, 00:23:48.767 "min_latency_us": 5586.1638095238095, 00:23:48.767 "max_latency_us": 31706.94095238095 00:23:48.767 } 00:23:48.767 ], 00:23:48.767 "core_count": 1 00:23:48.767 } 00:23:48.767 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:48.767 
13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:48.768 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:23:48.768 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:23:48.768 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:23:48.768 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:48.768 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:48.768 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:48.768 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:48.768 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:48.768 nvmf_trace.0 00:23:48.768 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:23:48.768 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 696120 00:23:48.768 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 696120 ']' 00:23:48.768 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 696120 00:23:48.768 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:23:48.768 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:48.768 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 696120 00:23:48.768 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:48.768 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:48.768 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 696120' 00:23:48.768 killing process with pid 696120 00:23:48.768 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 696120 00:23:48.768 Received shutdown signal, test time was about 10.000000 seconds 00:23:48.768 00:23:48.768 Latency(us) 00:23:48.768 [2024-12-05T12:55:31.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.768 [2024-12-05T12:55:31.355Z] =================================================================================================================== 00:23:48.768 [2024-12-05T12:55:31.355Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:48.768 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 696120 00:23:49.026 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:49.026 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:49.026 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:23:49.026 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:49.026 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:23:49.026 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:49.026 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:49.026 rmmod nvme_tcp 00:23:49.026 rmmod nvme_fabrics 00:23:49.026 rmmod nvme_keyring 00:23:49.026 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:49.026 13:55:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:23:49.026 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:23:49.026 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 696093 ']' 00:23:49.026 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 696093 00:23:49.026 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 696093 ']' 00:23:49.026 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 696093 00:23:49.026 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:23:49.026 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:49.026 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 696093 00:23:49.026 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:49.026 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:49.026 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 696093' 00:23:49.026 killing process with pid 696093 00:23:49.026 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 696093 00:23:49.026 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 696093 00:23:49.284 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:49.284 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:49.284 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:49.284 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 
00:23:49.284 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:23:49.284 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:49.284 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:23:49.284 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:49.284 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:49.284 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.284 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:49.284 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.815 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:51.815 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.w3O 00:23:51.815 00:23:51.815 real 0m20.495s 00:23:51.815 user 0m20.948s 00:23:51.815 sys 0m10.064s 00:23:51.815 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:51.815 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:51.815 ************************************ 00:23:51.815 END TEST nvmf_fips 00:23:51.815 ************************************ 00:23:51.815 13:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:51.815 13:55:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:51.815 13:55:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:23:51.815 13:55:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:51.815 ************************************ 00:23:51.815 START TEST nvmf_control_msg_list 00:23:51.816 ************************************ 00:23:51.816 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:51.816 * Looking for test storage... 00:23:51.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:51.816 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:51.816 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:23:51.816 13:55:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:51.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.816 --rc genhtml_branch_coverage=1 00:23:51.816 --rc genhtml_function_coverage=1 00:23:51.816 --rc genhtml_legend=1 00:23:51.816 --rc geninfo_all_blocks=1 00:23:51.816 --rc geninfo_unexecuted_blocks=1 00:23:51.816 00:23:51.816 ' 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:51.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.816 --rc genhtml_branch_coverage=1 00:23:51.816 --rc genhtml_function_coverage=1 00:23:51.816 --rc genhtml_legend=1 00:23:51.816 --rc geninfo_all_blocks=1 00:23:51.816 --rc geninfo_unexecuted_blocks=1 00:23:51.816 00:23:51.816 ' 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:51.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.816 --rc genhtml_branch_coverage=1 00:23:51.816 --rc genhtml_function_coverage=1 00:23:51.816 --rc genhtml_legend=1 00:23:51.816 --rc geninfo_all_blocks=1 00:23:51.816 --rc geninfo_unexecuted_blocks=1 00:23:51.816 00:23:51.816 ' 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:51.816 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.816 --rc genhtml_branch_coverage=1 00:23:51.816 --rc genhtml_function_coverage=1 00:23:51.816 --rc genhtml_legend=1 00:23:51.816 --rc geninfo_all_blocks=1 00:23:51.816 --rc geninfo_unexecuted_blocks=1 00:23:51.816 00:23:51.816 ' 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:51.816 13:55:34 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.816 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.817 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.817 13:55:34 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:23:51.817 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.817 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:23:51.817 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:51.817 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:51.817 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:51.817 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:51.817 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:51.817 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:51.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:51.817 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:51.817 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:51.817 13:55:34 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:51.817 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:23:51.817 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:51.817 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.817 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:51.817 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:51.817 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:51.817 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.817 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:51.817 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.817 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:51.817 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:51.817 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:23:51.817 13:55:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:58.390 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:58.390 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:23:58.390 13:55:39 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:58.390 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:58.390 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:58.390 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:58.390 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:58.390 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:23:58.390 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:58.390 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:23:58.390 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:23:58.390 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:23:58.390 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:23:58.390 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:23:58.390 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:23:58.390 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:58.390 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:58.391 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:58.391 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:58.391 13:55:39 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:58.391 Found net devices under 0000:86:00.0: cvl_0_0 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:58.391 13:55:39 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:58.391 Found net devices under 0000:86:00.1: cvl_0_1 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:58.391 13:55:39 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:58.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:58.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:23:58.391 00:23:58.391 --- 10.0.0.2 ping statistics --- 00:23:58.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.391 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:23:58.391 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:58.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:58.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:23:58.391 00:23:58.391 --- 10.0.0.1 ping statistics --- 00:23:58.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.391 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:23:58.391 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:58.391 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:23:58.391 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:58.391 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:58.391 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:58.391 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:58.391 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:23:58.391 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:58.391 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:58.391 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:23:58.391 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=701487 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 701487 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 701487 ']' 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:58.392 [2024-12-05 13:55:40.101075] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:23:58.392 [2024-12-05 13:55:40.101126] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:58.392 [2024-12-05 13:55:40.177514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.392 [2024-12-05 13:55:40.216865] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:58.392 [2024-12-05 13:55:40.216900] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:58.392 [2024-12-05 13:55:40.216907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:58.392 [2024-12-05 13:55:40.216914] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:58.392 [2024-12-05 13:55:40.216920] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
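Earlier in the trace, `nvmf_tcp_init` (nvmf/common.sh) moves the target-side NIC into a private network namespace and opens the NVMe/TCP port before the app starts. A condensed dry-run sketch of that sequence, with the device names (`cvl_0_0`/`cvl_0_1`), addresses, and rule order copied from the trace above — `run` only echoes, so nothing here needs root; swap it for `"$@"` to execute for real:

```shell
# Dry-run sketch of the namespace topology built by nvmf_tcp_init above.
run() { echo "+ $*"; }   # replace body with: "$@"  to actually run (requires root)

setup_nvmf_tcp_netns() {
  local ns=cvl_0_0_ns_spdk tgt_if=cvl_0_0 ini_if=cvl_0_1
  run ip -4 addr flush "$tgt_if"
  run ip -4 addr flush "$ini_if"
  run ip netns add "$ns"
  run ip link set "$tgt_if" netns "$ns"             # target NIC moves into the namespace
  run ip addr add 10.0.0.1/24 dev "$ini_if"         # initiator side stays in the root ns
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
  run ip link set "$ini_if" up
  run ip netns exec "$ns" ip link set "$tgt_if" up
  run ip netns exec "$ns" ip link set lo up
  # the ipts wrapper in the trace tags this rule with an SPDK_NVMF comment so
  # cleanup (iptables-save | grep -v SPDK_NVMF | iptables-restore) can drop it
  run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
}
setup_nvmf_tcp_netns
```

The two `ping -c 1` checks in the trace then verify reachability in both directions across the namespace boundary before the target is launched inside it with `ip netns exec`.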
00:23:58.392 [2024-12-05 13:55:40.217466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:58.392 [2024-12-05 13:55:40.366197] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:58.392 Malloc0 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:58.392 [2024-12-05 13:55:40.406514] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=701513 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=701514 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=701516 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 701513 00:23:58.392 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:58.392 [2024-12-05 13:55:40.495133] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
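The RPC calls control_msg_list.sh issues against the running target (transport, subsystem, backing bdev, namespace, listener) can be summarized as the sequence below. NQN, sizes, and addresses are copied from the trace; the `scripts/rpc.py` path is an assumption (the test invokes the same RPCs through its `rpc_cmd` helper), and the commands are echoed rather than executed:

```shell
# Sketch of the target-configuration RPC sequence from the trace above.
RPC="scripts/rpc.py"              # assumed path, relative to an SPDK checkout
SUBNQN=nqn.2024-07.io.spdk:cnode0

rpc_sequence() {
  # TCP transport with a deliberately tiny control-message pool (the point of this test)
  echo "$RPC nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1"
  echo "$RPC nvmf_create_subsystem $SUBNQN -a"        # -a: allow any host
  echo "$RPC bdev_malloc_create -b Malloc0 32 512"    # 32 MiB malloc bdev, 512 B blocks
  echo "$RPC nvmf_subsystem_add_ns $SUBNQN Malloc0"
  echo "$RPC nvmf_subsystem_add_listener $SUBNQN -t tcp -a 10.0.0.2 -s 4420"
}
rpc_sequence
```

Three `spdk_nvme_perf` initiators (cores 0x2, 0x4, 0x8) are then pointed at that one listener, so they contend for the single control message — which is why one perf instance reports ~41 ms average latency below while the other two complete in ~150 µs.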
00:23:58.392 [2024-12-05 13:55:40.495324] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:58.392 [2024-12-05 13:55:40.495486] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:58.958 Initializing NVMe Controllers 00:23:58.958 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:58.958 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:23:58.958 Initialization complete. Launching workers. 00:23:58.958 ======================================================== 00:23:58.958 Latency(us) 00:23:58.958 Device Information : IOPS MiB/s Average min max 00:23:58.958 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40977.58 40758.39 42012.35 00:23:58.958 ======================================================== 00:23:58.958 Total : 25.00 0.10 40977.58 40758.39 42012.35 00:23:58.958 00:23:59.225 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 701514 00:23:59.225 Initializing NVMe Controllers 00:23:59.225 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:59.225 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:23:59.225 Initialization complete. Launching workers. 
00:23:59.225 ======================================================== 00:23:59.225 Latency(us) 00:23:59.225 Device Information : IOPS MiB/s Average min max 00:23:59.225 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 6564.00 25.64 152.01 130.93 338.89 00:23:59.225 ======================================================== 00:23:59.225 Total : 6564.00 25.64 152.01 130.93 338.89 00:23:59.225 00:23:59.225 Initializing NVMe Controllers 00:23:59.225 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:59.225 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:23:59.225 Initialization complete. Launching workers. 00:23:59.225 ======================================================== 00:23:59.225 Latency(us) 00:23:59.225 Device Information : IOPS MiB/s Average min max 00:23:59.225 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 6561.96 25.63 152.04 126.35 385.06 00:23:59.225 ======================================================== 00:23:59.225 Total : 6561.96 25.63 152.04 126.35 385.06 00:23:59.225 00:23:59.225 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 701516 00:23:59.225 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:59.225 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:23:59.225 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:59.225 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:23:59.225 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:59.225 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:23:59.225 13:55:41 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:59.225 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:59.225 rmmod nvme_tcp 00:23:59.225 rmmod nvme_fabrics 00:23:59.225 rmmod nvme_keyring 00:23:59.225 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:59.225 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:23:59.225 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:23:59.225 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 701487 ']' 00:23:59.225 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 701487 00:23:59.225 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 701487 ']' 00:23:59.225 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 701487 00:23:59.225 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:23:59.225 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:59.225 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 701487 00:23:59.225 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:59.225 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:59.225 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 701487' 00:23:59.225 killing process with pid 701487 00:23:59.225 13:55:41 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 701487 00:23:59.225 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 701487 00:23:59.485 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:59.485 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:59.485 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:59.485 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:23:59.485 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:23:59.485 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:59.485 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:23:59.485 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:59.485 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:59.485 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.485 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:59.485 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.522 13:55:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:01.522 00:24:01.522 real 0m10.028s 00:24:01.522 user 0m6.231s 00:24:01.522 sys 0m5.557s 00:24:01.522 13:55:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:24:01.522 13:55:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:01.522 ************************************ 00:24:01.522 END TEST nvmf_control_msg_list 00:24:01.522 ************************************ 00:24:01.522 13:55:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:01.522 13:55:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:01.522 13:55:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:01.522 13:55:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:01.522 ************************************ 00:24:01.522 START TEST nvmf_wait_for_buf 00:24:01.522 ************************************ 00:24:01.522 13:55:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:01.522 * Looking for test storage... 
00:24:01.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:01.522 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:01.522 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:24:01.522 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:01.782 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:01.782 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:01.782 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:01.782 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:01.782 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:01.782 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:01.782 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:01.782 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:01.782 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:01.782 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:01.782 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:01.782 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:01.782 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:01.782 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:24:01.782 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:01.782 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:01.782 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:01.782 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:01.782 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:01.782 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:01.782 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:01.782 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:01.782 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:01.782 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:01.782 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:24:01.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.783 --rc genhtml_branch_coverage=1 00:24:01.783 --rc genhtml_function_coverage=1 00:24:01.783 --rc genhtml_legend=1 00:24:01.783 --rc geninfo_all_blocks=1 00:24:01.783 --rc geninfo_unexecuted_blocks=1 00:24:01.783 00:24:01.783 ' 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:01.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.783 --rc genhtml_branch_coverage=1 00:24:01.783 --rc genhtml_function_coverage=1 00:24:01.783 --rc genhtml_legend=1 00:24:01.783 --rc geninfo_all_blocks=1 00:24:01.783 --rc geninfo_unexecuted_blocks=1 00:24:01.783 00:24:01.783 ' 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:01.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.783 --rc genhtml_branch_coverage=1 00:24:01.783 --rc genhtml_function_coverage=1 00:24:01.783 --rc genhtml_legend=1 00:24:01.783 --rc geninfo_all_blocks=1 00:24:01.783 --rc geninfo_unexecuted_blocks=1 00:24:01.783 00:24:01.783 ' 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:01.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.783 --rc genhtml_branch_coverage=1 00:24:01.783 --rc genhtml_function_coverage=1 00:24:01.783 --rc genhtml_legend=1 00:24:01.783 --rc geninfo_all_blocks=1 00:24:01.783 --rc geninfo_unexecuted_blocks=1 00:24:01.783 00:24:01.783 ' 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:01.783 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:01.783 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:08.350 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:08.350 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:08.350 Found net devices under 0000:86:00.0: cvl_0_0 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:08.350 13:55:49 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:08.350 Found net devices under 0000:86:00.1: cvl_0_1 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:08.350 13:55:49 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:08.350 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:08.351 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:08.351 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:08.351 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:08.351 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:08.351 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:08.351 13:55:50 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:08.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:08.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:24:08.351 00:24:08.351 --- 10.0.0.2 ping statistics --- 00:24:08.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.351 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:08.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:08.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:24:08.351 00:24:08.351 --- 10.0.0.1 ping statistics --- 00:24:08.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.351 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=705267 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 705267 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 705267 ']' 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:08.351 [2024-12-05 13:55:50.173808] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:24:08.351 [2024-12-05 13:55:50.173850] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:08.351 [2024-12-05 13:55:50.252862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.351 [2024-12-05 13:55:50.293465] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:08.351 [2024-12-05 13:55:50.293499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:08.351 [2024-12-05 13:55:50.293509] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:08.351 [2024-12-05 13:55:50.293517] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:08.351 [2024-12-05 13:55:50.293523] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:08.351 [2024-12-05 13:55:50.294078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:08.351 
13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:08.351 Malloc0 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:24:08.351 [2024-12-05 13:55:50.469379] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:08.351 [2024-12-05 13:55:50.497576] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:08.351 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:08.351 [2024-12-05 13:55:50.581442] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:09.735 Initializing NVMe Controllers 00:24:09.735 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:09.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:09.735 Initialization complete. Launching workers. 00:24:09.735 ======================================================== 00:24:09.735 Latency(us) 00:24:09.735 Device Information : IOPS MiB/s Average min max 00:24:09.735 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 46.88 5.86 88314.42 30970.19 191538.76 00:24:09.735 ======================================================== 00:24:09.735 Total : 46.88 5.86 88314.42 30970.19 191538.76 00:24:09.735 00:24:09.735 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:09.735 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:09.735 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.735 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:09.735 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.735 13:55:52 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=726 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 726 -eq 0 ]] 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:09.735 rmmod nvme_tcp 00:24:09.735 rmmod nvme_fabrics 00:24:09.735 rmmod nvme_keyring 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 705267 ']' 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 705267 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 705267 ']' 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 705267 
00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 705267 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 705267' 00:24:09.735 killing process with pid 705267 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 705267 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 705267 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:09.735 13:55:52 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:09.735 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.267 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:12.267 00:24:12.267 real 0m10.369s 00:24:12.267 user 0m3.941s 00:24:12.267 sys 0m4.867s 00:24:12.267 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:12.267 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:12.267 ************************************ 00:24:12.267 END TEST nvmf_wait_for_buf 00:24:12.267 ************************************ 00:24:12.267 13:55:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:24:12.267 13:55:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:24:12.267 13:55:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:24:12.267 13:55:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:24:12.267 13:55:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:24:12.267 13:55:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:17.537 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:17.537 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:24:17.537 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:17.537 
13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:17.537 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:17.537 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:17.537 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:17.537 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:24:17.537 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:17.537 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:24:17.537 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:24:17.537 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:24:17.537 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:24:17.537 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:24:17.537 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:17.538 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:17.538 13:56:00 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:17.538 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:17.538 Found net devices under 0000:86:00.0: cvl_0_0 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:17.538 Found net devices under 0000:86:00.1: cvl_0_1 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:17.538 ************************************ 00:24:17.538 START TEST nvmf_perf_adq 00:24:17.538 ************************************ 00:24:17.538 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:17.797 * Looking for test storage... 00:24:17.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:17.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.797 --rc genhtml_branch_coverage=1 00:24:17.797 --rc genhtml_function_coverage=1 00:24:17.797 --rc genhtml_legend=1 00:24:17.797 --rc geninfo_all_blocks=1 00:24:17.797 --rc geninfo_unexecuted_blocks=1 00:24:17.797 00:24:17.797 ' 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:17.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.797 --rc genhtml_branch_coverage=1 00:24:17.797 --rc genhtml_function_coverage=1 00:24:17.797 --rc genhtml_legend=1 00:24:17.797 --rc geninfo_all_blocks=1 00:24:17.797 --rc geninfo_unexecuted_blocks=1 00:24:17.797 00:24:17.797 ' 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:17.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.797 --rc genhtml_branch_coverage=1 00:24:17.797 --rc genhtml_function_coverage=1 00:24:17.797 --rc genhtml_legend=1 00:24:17.797 --rc geninfo_all_blocks=1 00:24:17.797 --rc geninfo_unexecuted_blocks=1 00:24:17.797 00:24:17.797 ' 00:24:17.797 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:17.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.797 --rc genhtml_branch_coverage=1 00:24:17.797 --rc genhtml_function_coverage=1 00:24:17.797 --rc genhtml_legend=1 00:24:17.797 --rc geninfo_all_blocks=1 00:24:17.797 --rc geninfo_unexecuted_blocks=1 00:24:17.798 00:24:17.798 ' 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:17.798 13:56:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:17.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:24:17.798 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:24.366 13:56:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:24.366 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:24.367 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:24.367 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:24.367 Found net devices under 0000:86:00.0: cvl_0_0 00:24:24.367 13:56:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:24.367 Found net devices under 0000:86:00.1: cvl_0_1 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:24:24.367 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:24:24.625 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:24:26.528 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:24:31.814 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:24:31.814 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:31.814 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.814 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:31.814 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:31.814 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:31.814 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.814 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:31.815 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:31.815 13:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:31.815 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:31.815 Found net devices under 0000:86:00.0: cvl_0_0 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:31.815 Found net devices under 0000:86:00.1: cvl_0_1 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:31.815 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:31.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:31.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:24:31.815 00:24:31.815 --- 10.0.0.2 ping statistics --- 00:24:31.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.815 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:24:31.816 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:31.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:31.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:24:31.816 00:24:31.816 --- 10.0.0.1 ping statistics --- 00:24:31.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.816 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:24:31.816 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:31.816 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:24:31.816 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:31.816 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:31.816 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:31.816 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:31.816 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:31.816 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:31.816 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:32.075 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:32.075 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:24:32.075 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:32.075 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:32.075 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=713606 00:24:32.075 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 713606 00:24:32.075 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:32.075 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 713606 ']' 00:24:32.075 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.075 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:32.075 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.075 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:32.075 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:32.075 [2024-12-05 13:56:14.465549] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:24:32.075 [2024-12-05 13:56:14.465599] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.075 [2024-12-05 13:56:14.545857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:32.075 [2024-12-05 13:56:14.589017] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.075 [2024-12-05 13:56:14.589054] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:32.075 [2024-12-05 13:56:14.589062] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:32.075 [2024-12-05 13:56:14.589069] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:32.075 [2024-12-05 13:56:14.589074] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
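For readers following the trace: the `nvmf_tcp_init` steps above (nvmf/common.sh, the `ip netns` / `ip addr` / `iptables` commands) move one port of the E810 NIC pair into a private network namespace so that target and initiator traffic crosses a real link. A dry-run sketch of that plumbing is below — it only prints the commands, since executing them needs root and the actual `cvl_0_*` netdevs; names and addresses are taken verbatim from the log.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace plumbing traced above (nvmf_tcp_init).
# We echo instead of executing: the real commands require root privileges
# and the cvl_0_0 / cvl_0_1 interfaces created by the ice driver.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0            # target side, moved into the namespace
INI_IF=cvl_0_1            # initiator side, stays in the root namespace
cmds=(
  "ip netns add $NS"
  "ip link set $TGT_IF netns $NS"
  "ip addr add 10.0.0.1/24 dev $INI_IF"
  "ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF"
  "ip link set $INI_IF up"
  "ip netns exec $NS ip link set $TGT_IF up"
  "iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT"
  "ping -c 1 10.0.0.2"    # initiator -> target reachability check
)
printf '%s\n' "${cmds[@]}"
```

The target app is then launched with `ip netns exec cvl_0_0_ns_spdk`, which is why the trace wraps `nvmf_tgt` (and the reverse ping) in that namespace.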
00:24:32.075 [2024-12-05 13:56:14.590542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.075 [2024-12-05 13:56:14.590650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:32.075 [2024-12-05 13:56:14.590762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.075 [2024-12-05 13:56:14.590763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:24:33.027 13:56:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:33.027 [2024-12-05 13:56:15.502302] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:33.027 Malloc1 00:24:33.027 13:56:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:33.027 [2024-12-05 13:56:15.565952] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=713859 00:24:33.027 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:24:33.027 13:56:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:35.553 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:24:35.553 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.553 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:35.553 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.553 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:24:35.553 "tick_rate": 2100000000, 00:24:35.553 "poll_groups": [ 00:24:35.553 { 00:24:35.553 "name": "nvmf_tgt_poll_group_000", 00:24:35.553 "admin_qpairs": 1, 00:24:35.553 "io_qpairs": 1, 00:24:35.553 "current_admin_qpairs": 1, 00:24:35.553 "current_io_qpairs": 1, 00:24:35.553 "pending_bdev_io": 0, 00:24:35.553 "completed_nvme_io": 19455, 00:24:35.553 "transports": [ 00:24:35.553 { 00:24:35.553 "trtype": "TCP" 00:24:35.553 } 00:24:35.553 ] 00:24:35.553 }, 00:24:35.553 { 00:24:35.553 "name": "nvmf_tgt_poll_group_001", 00:24:35.553 "admin_qpairs": 0, 00:24:35.553 "io_qpairs": 1, 00:24:35.553 "current_admin_qpairs": 0, 00:24:35.553 "current_io_qpairs": 1, 00:24:35.553 "pending_bdev_io": 0, 00:24:35.553 "completed_nvme_io": 19772, 00:24:35.553 "transports": [ 00:24:35.553 { 00:24:35.554 "trtype": "TCP" 00:24:35.554 } 00:24:35.554 ] 00:24:35.554 }, 00:24:35.554 { 00:24:35.554 "name": "nvmf_tgt_poll_group_002", 00:24:35.554 "admin_qpairs": 0, 00:24:35.554 "io_qpairs": 1, 00:24:35.554 "current_admin_qpairs": 0, 00:24:35.554 "current_io_qpairs": 1, 00:24:35.554 "pending_bdev_io": 0, 00:24:35.554 "completed_nvme_io": 19657, 00:24:35.554 
"transports": [ 00:24:35.554 { 00:24:35.554 "trtype": "TCP" 00:24:35.554 } 00:24:35.554 ] 00:24:35.554 }, 00:24:35.554 { 00:24:35.554 "name": "nvmf_tgt_poll_group_003", 00:24:35.554 "admin_qpairs": 0, 00:24:35.554 "io_qpairs": 1, 00:24:35.554 "current_admin_qpairs": 0, 00:24:35.554 "current_io_qpairs": 1, 00:24:35.554 "pending_bdev_io": 0, 00:24:35.554 "completed_nvme_io": 19585, 00:24:35.554 "transports": [ 00:24:35.554 { 00:24:35.554 "trtype": "TCP" 00:24:35.554 } 00:24:35.554 ] 00:24:35.554 } 00:24:35.554 ] 00:24:35.554 }' 00:24:35.554 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:24:35.554 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:24:35.554 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:24:35.554 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:24:35.554 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 713859 00:24:43.687 Initializing NVMe Controllers 00:24:43.687 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:43.687 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:43.687 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:43.687 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:43.687 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:43.687 Initialization complete. Launching workers. 
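The check traced above asserts that each of the four poll groups is serving exactly one active I/O qpair — perf_adq.sh pipes `nvmf_get_stats` through `jq -r '.poll_groups[] | select(.current_io_qpairs == 1)'` and compares the line count against the qpair count. A self-contained sketch of the same check is below; it substitutes `grep` for `jq` and a trimmed inline stats blob for the live RPC output, so it runs without an SPDK target.

```shell
#!/usr/bin/env bash
# Sketch of the perf_adq.sh poll-group check: with ADQ steering working,
# every nvmf poll group should hold exactly one active I/O qpair.
# nvmf_stats is a trimmed stand-in for real `rpc.py nvmf_get_stats` output;
# the actual script uses jq, grep -o is a dependency-free simplification.
nvmf_stats='{"poll_groups":[
  {"name":"nvmf_tgt_poll_group_000","current_io_qpairs":1},
  {"name":"nvmf_tgt_poll_group_001","current_io_qpairs":1},
  {"name":"nvmf_tgt_poll_group_002","current_io_qpairs":1},
  {"name":"nvmf_tgt_poll_group_003","current_io_qpairs":1}]}'
count=$(grep -o '"current_io_qpairs":1' <<<"$nvmf_stats" | wc -l)
if [[ $count -ne 4 ]]; then
  echo "ADQ steering failed: only $count/4 poll groups busy" >&2
  exit 1
fi
echo "all $count poll groups have one active I/O qpair"
```

If traffic were not steered per-queue, several qpairs would land on one poll group and the count of groups with `current_io_qpairs == 1` would fall below the qpair total, failing the `[[ 4 -ne 4 ]]`-style comparison seen in the trace.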
00:24:43.687 ======================================================== 00:24:43.687 Latency(us) 00:24:43.687 Device Information : IOPS MiB/s Average min max 00:24:43.687 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10489.90 40.98 6102.46 1973.94 10620.91 00:24:43.687 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10615.10 41.47 6029.57 2107.25 10470.81 00:24:43.687 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10383.20 40.56 6162.96 2193.85 10644.55 00:24:43.687 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10477.90 40.93 6108.81 2155.12 10434.79 00:24:43.687 ======================================================== 00:24:43.687 Total : 41966.09 163.93 6100.58 1973.94 10644.55 00:24:43.687 00:24:43.687 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:24:43.687 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:43.687 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:24:43.687 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:43.687 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:24:43.687 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:43.687 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:43.687 rmmod nvme_tcp 00:24:43.687 rmmod nvme_fabrics 00:24:43.687 rmmod nvme_keyring 00:24:43.687 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:43.687 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:24:43.687 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:24:43.687 13:56:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 713606 ']' 00:24:43.687 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 713606 00:24:43.687 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 713606 ']' 00:24:43.687 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 713606 00:24:43.687 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:24:43.687 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:43.687 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 713606 00:24:43.687 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:43.687 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:43.687 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 713606' 00:24:43.687 killing process with pid 713606 00:24:43.687 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 713606 00:24:43.687 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 713606 00:24:43.687 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:43.687 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:43.687 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:43.687 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:24:43.687 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:24:43.687 13:56:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:43.688 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:24:43.688 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:43.688 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:43.688 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.688 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.688 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.587 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:45.588 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:24:45.588 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:24:45.588 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:24:46.959 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:24:48.856 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:54.125 13:56:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:54.125 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:54.125 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:54.125 Found net devices under 0000:86:00.0: cvl_0_0 00:24:54.125 13:56:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:54.125 Found net devices under 0000:86:00.1: cvl_0_1 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:54.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:54.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.422 ms 00:24:54.125 00:24:54.125 --- 10.0.0.2 ping statistics --- 00:24:54.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.125 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:54.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:54.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:24:54.125 00:24:54.125 --- 10.0.0.1 ping statistics --- 00:24:54.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.125 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:24:54.125 net.core.busy_poll = 1 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:24:54.125 net.core.busy_read = 1 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:24:54.125 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:24:54.384 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:24:54.384 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:24:54.384 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:24:54.384 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:54.384 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:54.384 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:54.384 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:54.384 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=717641 00:24:54.384 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 717641 00:24:54.384 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:24:54.384 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 717641 ']' 00:24:54.384 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.384 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:54.384 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.384 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:54.384 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:54.384 [2024-12-05 13:56:36.889523] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:24:54.384 [2024-12-05 13:56:36.889568] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.384 [2024-12-05 13:56:36.967438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:54.643 [2024-12-05 13:56:37.009412] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.643 [2024-12-05 13:56:37.009448] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:54.643 [2024-12-05 13:56:37.009455] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:54.643 [2024-12-05 13:56:37.009464] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:54.643 [2024-12-05 13:56:37.009470] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:54.643 [2024-12-05 13:56:37.011000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.643 [2024-12-05 13:56:37.011110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:54.643 [2024-12-05 13:56:37.011216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.643 [2024-12-05 13:56:37.011217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:54.643 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:54.643 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:24:54.643 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:54.643 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:54.643 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:54.643 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:54.643 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:24:54.643 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:24:54.643 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:24:54.643 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.643 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:54.643 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:54.643 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:24:54.643 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:24:54.643 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.643 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:54.643 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.643 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:24:54.643 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.643 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:54.643 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.643 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:24:54.643 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.643 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:54.643 [2024-12-05 13:56:37.204404] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:54.643 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.643 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:54.643 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.643 13:56:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:54.901 Malloc1 00:24:54.901 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.901 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:54.901 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.901 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:54.901 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.901 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:54.901 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.901 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:54.901 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.901 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:54.901 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.901 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:54.901 [2024-12-05 13:56:37.269997] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:54.901 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.901 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=717666 
00:24:54.901 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:24:54.901 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:56.801 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:24:56.801 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.801 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:56.801 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.801 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:24:56.801 "tick_rate": 2100000000, 00:24:56.801 "poll_groups": [ 00:24:56.801 { 00:24:56.801 "name": "nvmf_tgt_poll_group_000", 00:24:56.801 "admin_qpairs": 1, 00:24:56.801 "io_qpairs": 1, 00:24:56.801 "current_admin_qpairs": 1, 00:24:56.801 "current_io_qpairs": 1, 00:24:56.801 "pending_bdev_io": 0, 00:24:56.801 "completed_nvme_io": 28203, 00:24:56.801 "transports": [ 00:24:56.801 { 00:24:56.801 "trtype": "TCP" 00:24:56.801 } 00:24:56.801 ] 00:24:56.801 }, 00:24:56.801 { 00:24:56.801 "name": "nvmf_tgt_poll_group_001", 00:24:56.801 "admin_qpairs": 0, 00:24:56.801 "io_qpairs": 3, 00:24:56.801 "current_admin_qpairs": 0, 00:24:56.801 "current_io_qpairs": 3, 00:24:56.801 "pending_bdev_io": 0, 00:24:56.801 "completed_nvme_io": 30720, 00:24:56.801 "transports": [ 00:24:56.801 { 00:24:56.801 "trtype": "TCP" 00:24:56.801 } 00:24:56.801 ] 00:24:56.801 }, 00:24:56.801 { 00:24:56.801 "name": "nvmf_tgt_poll_group_002", 00:24:56.801 "admin_qpairs": 0, 00:24:56.801 "io_qpairs": 0, 00:24:56.801 "current_admin_qpairs": 0, 
00:24:56.801 "current_io_qpairs": 0, 00:24:56.801 "pending_bdev_io": 0, 00:24:56.801 "completed_nvme_io": 0, 00:24:56.801 "transports": [ 00:24:56.801 { 00:24:56.801 "trtype": "TCP" 00:24:56.801 } 00:24:56.801 ] 00:24:56.801 }, 00:24:56.801 { 00:24:56.801 "name": "nvmf_tgt_poll_group_003", 00:24:56.801 "admin_qpairs": 0, 00:24:56.801 "io_qpairs": 0, 00:24:56.801 "current_admin_qpairs": 0, 00:24:56.801 "current_io_qpairs": 0, 00:24:56.801 "pending_bdev_io": 0, 00:24:56.801 "completed_nvme_io": 0, 00:24:56.801 "transports": [ 00:24:56.801 { 00:24:56.801 "trtype": "TCP" 00:24:56.801 } 00:24:56.801 ] 00:24:56.801 } 00:24:56.801 ] 00:24:56.801 }' 00:24:56.801 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:24:56.801 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:24:56.801 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:24:56.801 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:24:56.801 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 717666 00:25:04.940 Initializing NVMe Controllers 00:25:04.940 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:04.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:04.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:04.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:04.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:04.940 Initialization complete. Launching workers. 
00:25:04.940 ======================================================== 00:25:04.940 Latency(us) 00:25:04.940 Device Information : IOPS MiB/s Average min max 00:25:04.940 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5580.90 21.80 11467.68 1464.66 57110.54 00:25:04.940 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14167.50 55.34 4516.81 1488.88 46027.98 00:25:04.940 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4909.80 19.18 13034.93 991.90 62563.32 00:25:04.940 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5698.30 22.26 11230.19 1489.10 58305.38 00:25:04.940 ======================================================== 00:25:04.940 Total : 30356.50 118.58 8432.58 991.90 62563.32 00:25:04.940 00:25:04.940 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:25:04.940 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:04.940 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:25:04.940 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:04.940 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:25:04.940 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:04.940 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:04.940 rmmod nvme_tcp 00:25:04.940 rmmod nvme_fabrics 00:25:04.940 rmmod nvme_keyring 00:25:04.940 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:04.940 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:25:04.940 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:25:04.940 13:56:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 717641 ']' 00:25:04.940 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 717641 00:25:04.940 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 717641 ']' 00:25:04.940 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 717641 00:25:04.940 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:25:04.940 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:04.940 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 717641 00:25:04.940 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:04.940 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:04.940 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 717641' 00:25:04.940 killing process with pid 717641 00:25:04.940 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 717641 00:25:04.940 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 717641 00:25:05.199 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:05.199 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:05.199 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:05.199 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:25:05.199 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:25:05.199 13:56:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:05.199 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:25:05.199 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:05.199 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:05.199 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.199 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:05.199 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.490 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:08.490 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:25:08.490 00:25:08.490 real 0m50.714s 00:25:08.490 user 2m46.453s 00:25:08.490 sys 0m10.389s 00:25:08.490 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:08.490 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:08.490 ************************************ 00:25:08.490 END TEST nvmf_perf_adq 00:25:08.490 ************************************ 00:25:08.490 13:56:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:08.490 13:56:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:08.490 13:56:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:08.490 13:56:50 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:25:08.490 ************************************ 00:25:08.490 START TEST nvmf_shutdown 00:25:08.490 ************************************ 00:25:08.490 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:08.490 * Looking for test storage... 00:25:08.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:08.490 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:08.490 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:25:08.490 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:08.490 13:56:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:08.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.490 --rc genhtml_branch_coverage=1 00:25:08.490 --rc genhtml_function_coverage=1 00:25:08.490 --rc genhtml_legend=1 00:25:08.490 --rc geninfo_all_blocks=1 00:25:08.490 --rc geninfo_unexecuted_blocks=1 00:25:08.490 00:25:08.490 ' 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:08.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.490 --rc genhtml_branch_coverage=1 00:25:08.490 --rc genhtml_function_coverage=1 00:25:08.490 --rc genhtml_legend=1 00:25:08.490 --rc geninfo_all_blocks=1 00:25:08.490 --rc geninfo_unexecuted_blocks=1 00:25:08.490 00:25:08.490 ' 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:08.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.490 --rc genhtml_branch_coverage=1 00:25:08.490 --rc genhtml_function_coverage=1 00:25:08.490 --rc genhtml_legend=1 00:25:08.490 --rc geninfo_all_blocks=1 00:25:08.490 --rc geninfo_unexecuted_blocks=1 00:25:08.490 00:25:08.490 ' 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:08.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.490 --rc genhtml_branch_coverage=1 00:25:08.490 --rc genhtml_function_coverage=1 00:25:08.490 --rc genhtml_legend=1 00:25:08.490 --rc geninfo_all_blocks=1 00:25:08.490 --rc geninfo_unexecuted_blocks=1 00:25:08.490 00:25:08.490 ' 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:08.490 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:08.491 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.491 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.491 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.491 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:25:08.491 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.491 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:25:08.491 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:08.491 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:08.491 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:08.491 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:08.491 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:08.491 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:08.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:08.491 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:08.491 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:08.491 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:08.491 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:08.491 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:08.491 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:25:08.491 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:08.491 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:08.491 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:08.751 ************************************ 00:25:08.751 START TEST nvmf_shutdown_tc1 00:25:08.751 ************************************ 00:25:08.751 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:25:08.751 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:25:08.751 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:08.751 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:08.751 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:08.751 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:08.751 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:08.751 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:08.751 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.751 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:25:08.751 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.751 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:08.751 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:08.751 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:08.751 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:25:15.322 13:56:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:15.322 13:56:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:15.322 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.322 13:56:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:15.322 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:15.322 Found net devices under 0000:86:00.0: cvl_0_0 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:15.322 Found net devices under 0000:86:00.1: cvl_0_1 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:15.322 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:15.323 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:15.323 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:15.323 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:15.323 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:15.323 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:15.323 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:15.323 13:56:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:15.323 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:15.323 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:15.323 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:15.323 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:15.323 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:15.323 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:15.323 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:15.323 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:15.323 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:15.323 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:15.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:15.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:25:15.323 00:25:15.323 --- 10.0.0.2 ping statistics --- 00:25:15.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.323 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:15.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:15.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:25:15.323 00:25:15.323 --- 10.0.0.1 ping statistics --- 00:25:15.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.323 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=723121 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 723121 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 723121 ']' 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:15.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:15.323 [2024-12-05 13:56:57.202393] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:25:15.323 [2024-12-05 13:56:57.202439] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:15.323 [2024-12-05 13:56:57.280417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:15.323 [2024-12-05 13:56:57.322268] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:15.323 [2024-12-05 13:56:57.322305] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:15.323 [2024-12-05 13:56:57.322312] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:15.323 [2024-12-05 13:56:57.322319] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:15.323 [2024-12-05 13:56:57.322324] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:15.323 [2024-12-05 13:56:57.323951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:15.323 [2024-12-05 13:56:57.324060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:15.323 [2024-12-05 13:56:57.324166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.323 [2024-12-05 13:56:57.324166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:15.323 [2024-12-05 13:56:57.462088] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.323 13:56:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:15.323 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:15.324 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:15.324 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:15.324 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:15.324 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.324 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:15.324 Malloc1 00:25:15.324 [2024-12-05 13:56:57.572184] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.324 Malloc2 00:25:15.324 Malloc3 00:25:15.324 Malloc4 00:25:15.324 Malloc5 00:25:15.324 Malloc6 00:25:15.324 Malloc7 00:25:15.324 Malloc8 00:25:15.324 Malloc9 
00:25:15.584 Malloc10 00:25:15.584 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.584 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:15.584 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:15.584 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:15.584 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=723294 00:25:15.584 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 723294 /var/tmp/bdevperf.sock 00:25:15.584 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 723294 ']' 00:25:15.584 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:15.584 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:25:15.584 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:15.584 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:15.584 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:15.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:15.584 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:25:15.584 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:15.584 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:25:15.584 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:15.584 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:15.584 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:15.584 { 00:25:15.584 "params": { 00:25:15.584 "name": "Nvme$subsystem", 00:25:15.584 "trtype": "$TEST_TRANSPORT", 00:25:15.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:15.584 "adrfam": "ipv4", 00:25:15.584 "trsvcid": "$NVMF_PORT", 00:25:15.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:15.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:15.584 "hdgst": ${hdgst:-false}, 00:25:15.584 "ddgst": ${ddgst:-false} 00:25:15.584 }, 00:25:15.584 "method": "bdev_nvme_attach_controller" 00:25:15.584 } 00:25:15.584 EOF 00:25:15.584 )") 00:25:15.584 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:15.584 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:15.584 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:15.584 { 00:25:15.584 "params": { 00:25:15.584 "name": "Nvme$subsystem", 00:25:15.584 "trtype": "$TEST_TRANSPORT", 00:25:15.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:15.584 "adrfam": "ipv4", 00:25:15.584 "trsvcid": "$NVMF_PORT", 00:25:15.584 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:25:15.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:15.584 "hdgst": ${hdgst:-false}, 00:25:15.584 "ddgst": ${ddgst:-false} 00:25:15.584 }, 00:25:15.584 "method": "bdev_nvme_attach_controller" 00:25:15.584 } 00:25:15.584 EOF 00:25:15.584 )") 00:25:15.584 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:15.584 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:15.584 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:15.584 { 00:25:15.584 "params": { 00:25:15.584 "name": "Nvme$subsystem", 00:25:15.584 "trtype": "$TEST_TRANSPORT", 00:25:15.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:15.584 "adrfam": "ipv4", 00:25:15.584 "trsvcid": "$NVMF_PORT", 00:25:15.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:15.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:15.584 "hdgst": ${hdgst:-false}, 00:25:15.584 "ddgst": ${ddgst:-false} 00:25:15.584 }, 00:25:15.584 "method": "bdev_nvme_attach_controller" 00:25:15.584 } 00:25:15.584 EOF 00:25:15.584 )") 00:25:15.584 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:15.584 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:15.584 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:15.584 { 00:25:15.584 "params": { 00:25:15.584 "name": "Nvme$subsystem", 00:25:15.584 "trtype": "$TEST_TRANSPORT", 00:25:15.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:15.584 "adrfam": "ipv4", 00:25:15.584 "trsvcid": "$NVMF_PORT", 00:25:15.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:15.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:15.584 "hdgst": 
${hdgst:-false}, 00:25:15.584 "ddgst": ${ddgst:-false} 00:25:15.584 }, 00:25:15.584 "method": "bdev_nvme_attach_controller" 00:25:15.584 } 00:25:15.584 EOF 00:25:15.584 )") 00:25:15.584 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:15.584 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:15.584 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:15.584 { 00:25:15.584 "params": { 00:25:15.584 "name": "Nvme$subsystem", 00:25:15.584 "trtype": "$TEST_TRANSPORT", 00:25:15.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:15.584 "adrfam": "ipv4", 00:25:15.584 "trsvcid": "$NVMF_PORT", 00:25:15.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:15.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:15.584 "hdgst": ${hdgst:-false}, 00:25:15.584 "ddgst": ${ddgst:-false} 00:25:15.584 }, 00:25:15.584 "method": "bdev_nvme_attach_controller" 00:25:15.584 } 00:25:15.584 EOF 00:25:15.584 )") 00:25:15.584 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:15.584 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:15.584 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:15.584 { 00:25:15.584 "params": { 00:25:15.584 "name": "Nvme$subsystem", 00:25:15.584 "trtype": "$TEST_TRANSPORT", 00:25:15.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:15.584 "adrfam": "ipv4", 00:25:15.584 "trsvcid": "$NVMF_PORT", 00:25:15.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:15.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:15.584 "hdgst": ${hdgst:-false}, 00:25:15.584 "ddgst": ${ddgst:-false} 00:25:15.584 }, 00:25:15.584 "method": "bdev_nvme_attach_controller" 
00:25:15.584 } 00:25:15.584 EOF 00:25:15.584 )") 00:25:15.584 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:15.584 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:15.584 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:15.584 { 00:25:15.584 "params": { 00:25:15.584 "name": "Nvme$subsystem", 00:25:15.584 "trtype": "$TEST_TRANSPORT", 00:25:15.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:15.585 "adrfam": "ipv4", 00:25:15.585 "trsvcid": "$NVMF_PORT", 00:25:15.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:15.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:15.585 "hdgst": ${hdgst:-false}, 00:25:15.585 "ddgst": ${ddgst:-false} 00:25:15.585 }, 00:25:15.585 "method": "bdev_nvme_attach_controller" 00:25:15.585 } 00:25:15.585 EOF 00:25:15.585 )") 00:25:15.585 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:15.585 [2024-12-05 13:56:58.047167] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:25:15.585 [2024-12-05 13:56:58.047219] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:25:15.585 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:15.585 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:15.585 { 00:25:15.585 "params": { 00:25:15.585 "name": "Nvme$subsystem", 00:25:15.585 "trtype": "$TEST_TRANSPORT", 00:25:15.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:15.585 "adrfam": "ipv4", 00:25:15.585 "trsvcid": "$NVMF_PORT", 00:25:15.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:15.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:15.585 "hdgst": ${hdgst:-false}, 00:25:15.585 "ddgst": ${ddgst:-false} 00:25:15.585 }, 00:25:15.585 "method": "bdev_nvme_attach_controller" 00:25:15.585 } 00:25:15.585 EOF 00:25:15.585 )") 00:25:15.585 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:15.585 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:15.585 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:15.585 { 00:25:15.585 "params": { 00:25:15.585 "name": "Nvme$subsystem", 00:25:15.585 "trtype": "$TEST_TRANSPORT", 00:25:15.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:15.585 "adrfam": "ipv4", 00:25:15.585 "trsvcid": "$NVMF_PORT", 00:25:15.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:15.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:15.585 "hdgst": ${hdgst:-false}, 00:25:15.585 "ddgst": ${ddgst:-false} 00:25:15.585 }, 00:25:15.585 "method": "bdev_nvme_attach_controller" 
00:25:15.585 } 00:25:15.585 EOF 00:25:15.585 )") 00:25:15.585 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:15.585 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:15.585 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:15.585 { 00:25:15.585 "params": { 00:25:15.585 "name": "Nvme$subsystem", 00:25:15.585 "trtype": "$TEST_TRANSPORT", 00:25:15.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:15.585 "adrfam": "ipv4", 00:25:15.585 "trsvcid": "$NVMF_PORT", 00:25:15.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:15.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:15.585 "hdgst": ${hdgst:-false}, 00:25:15.585 "ddgst": ${ddgst:-false} 00:25:15.585 }, 00:25:15.585 "method": "bdev_nvme_attach_controller" 00:25:15.585 } 00:25:15.585 EOF 00:25:15.585 )") 00:25:15.585 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:15.585 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:25:15.585 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:25:15.585 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:15.585 "params": { 00:25:15.585 "name": "Nvme1", 00:25:15.585 "trtype": "tcp", 00:25:15.585 "traddr": "10.0.0.2", 00:25:15.585 "adrfam": "ipv4", 00:25:15.585 "trsvcid": "4420", 00:25:15.585 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:15.585 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:15.585 "hdgst": false, 00:25:15.585 "ddgst": false 00:25:15.585 }, 00:25:15.585 "method": "bdev_nvme_attach_controller" 00:25:15.585 },{ 00:25:15.585 "params": { 00:25:15.585 "name": "Nvme2", 00:25:15.585 "trtype": "tcp", 00:25:15.585 "traddr": "10.0.0.2", 00:25:15.585 "adrfam": "ipv4", 00:25:15.585 "trsvcid": "4420", 00:25:15.585 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:15.585 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:15.585 "hdgst": false, 00:25:15.585 "ddgst": false 00:25:15.585 }, 00:25:15.585 "method": "bdev_nvme_attach_controller" 00:25:15.585 },{ 00:25:15.585 "params": { 00:25:15.585 "name": "Nvme3", 00:25:15.585 "trtype": "tcp", 00:25:15.585 "traddr": "10.0.0.2", 00:25:15.585 "adrfam": "ipv4", 00:25:15.585 "trsvcid": "4420", 00:25:15.585 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:15.585 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:15.585 "hdgst": false, 00:25:15.585 "ddgst": false 00:25:15.585 }, 00:25:15.585 "method": "bdev_nvme_attach_controller" 00:25:15.585 },{ 00:25:15.585 "params": { 00:25:15.585 "name": "Nvme4", 00:25:15.585 "trtype": "tcp", 00:25:15.585 "traddr": "10.0.0.2", 00:25:15.585 "adrfam": "ipv4", 00:25:15.585 "trsvcid": "4420", 00:25:15.585 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:15.585 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:15.585 "hdgst": false, 00:25:15.585 "ddgst": false 00:25:15.585 }, 00:25:15.585 "method": "bdev_nvme_attach_controller" 00:25:15.585 },{ 00:25:15.585 "params": { 
00:25:15.585 "name": "Nvme5", 00:25:15.585 "trtype": "tcp", 00:25:15.585 "traddr": "10.0.0.2", 00:25:15.585 "adrfam": "ipv4", 00:25:15.585 "trsvcid": "4420", 00:25:15.585 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:15.585 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:15.585 "hdgst": false, 00:25:15.585 "ddgst": false 00:25:15.585 }, 00:25:15.585 "method": "bdev_nvme_attach_controller" 00:25:15.585 },{ 00:25:15.585 "params": { 00:25:15.585 "name": "Nvme6", 00:25:15.585 "trtype": "tcp", 00:25:15.585 "traddr": "10.0.0.2", 00:25:15.585 "adrfam": "ipv4", 00:25:15.585 "trsvcid": "4420", 00:25:15.585 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:15.585 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:15.585 "hdgst": false, 00:25:15.585 "ddgst": false 00:25:15.585 }, 00:25:15.585 "method": "bdev_nvme_attach_controller" 00:25:15.585 },{ 00:25:15.585 "params": { 00:25:15.585 "name": "Nvme7", 00:25:15.585 "trtype": "tcp", 00:25:15.585 "traddr": "10.0.0.2", 00:25:15.585 "adrfam": "ipv4", 00:25:15.585 "trsvcid": "4420", 00:25:15.585 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:15.585 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:15.585 "hdgst": false, 00:25:15.585 "ddgst": false 00:25:15.585 }, 00:25:15.585 "method": "bdev_nvme_attach_controller" 00:25:15.585 },{ 00:25:15.585 "params": { 00:25:15.585 "name": "Nvme8", 00:25:15.585 "trtype": "tcp", 00:25:15.585 "traddr": "10.0.0.2", 00:25:15.585 "adrfam": "ipv4", 00:25:15.585 "trsvcid": "4420", 00:25:15.585 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:15.585 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:15.585 "hdgst": false, 00:25:15.585 "ddgst": false 00:25:15.585 }, 00:25:15.585 "method": "bdev_nvme_attach_controller" 00:25:15.585 },{ 00:25:15.585 "params": { 00:25:15.585 "name": "Nvme9", 00:25:15.585 "trtype": "tcp", 00:25:15.585 "traddr": "10.0.0.2", 00:25:15.585 "adrfam": "ipv4", 00:25:15.585 "trsvcid": "4420", 00:25:15.585 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:15.585 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:25:15.585 "hdgst": false, 00:25:15.585 "ddgst": false 00:25:15.585 }, 00:25:15.585 "method": "bdev_nvme_attach_controller" 00:25:15.585 },{ 00:25:15.585 "params": { 00:25:15.585 "name": "Nvme10", 00:25:15.585 "trtype": "tcp", 00:25:15.585 "traddr": "10.0.0.2", 00:25:15.585 "adrfam": "ipv4", 00:25:15.585 "trsvcid": "4420", 00:25:15.585 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:15.585 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:15.585 "hdgst": false, 00:25:15.585 "ddgst": false 00:25:15.585 }, 00:25:15.585 "method": "bdev_nvme_attach_controller" 00:25:15.585 }' 00:25:15.585 [2024-12-05 13:56:58.128339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.844 [2024-12-05 13:56:58.169357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.749 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:17.749 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:25:17.749 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:17.749 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.749 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:17.749 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.749 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 723294 00:25:17.749 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:25:17.749 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:25:18.686 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 723294 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:25:18.687 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 723121 00:25:18.687 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:25:18.687 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:18.687 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:25:18.687 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:25:18.687 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:18.687 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:18.687 { 00:25:18.687 "params": { 00:25:18.687 "name": "Nvme$subsystem", 00:25:18.687 "trtype": "$TEST_TRANSPORT", 00:25:18.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:18.687 "adrfam": "ipv4", 00:25:18.687 "trsvcid": "$NVMF_PORT", 00:25:18.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:18.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:18.687 "hdgst": ${hdgst:-false}, 00:25:18.687 "ddgst": ${ddgst:-false} 00:25:18.687 }, 00:25:18.687 "method": "bdev_nvme_attach_controller" 00:25:18.687 } 00:25:18.687 EOF 00:25:18.687 )") 00:25:18.687 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:18.687 13:57:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:18.687 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:18.687 { 00:25:18.687 "params": { 00:25:18.687 "name": "Nvme$subsystem", 00:25:18.687 "trtype": "$TEST_TRANSPORT", 00:25:18.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:18.687 "adrfam": "ipv4", 00:25:18.687 "trsvcid": "$NVMF_PORT", 00:25:18.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:18.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:18.687 "hdgst": ${hdgst:-false}, 00:25:18.687 "ddgst": ${ddgst:-false} 00:25:18.687 }, 00:25:18.687 "method": "bdev_nvme_attach_controller" 00:25:18.687 } 00:25:18.687 EOF 00:25:18.687 )") 00:25:18.687 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:18.687 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:18.687 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:18.687 { 00:25:18.687 "params": { 00:25:18.687 "name": "Nvme$subsystem", 00:25:18.687 "trtype": "$TEST_TRANSPORT", 00:25:18.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:18.687 "adrfam": "ipv4", 00:25:18.687 "trsvcid": "$NVMF_PORT", 00:25:18.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:18.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:18.687 "hdgst": ${hdgst:-false}, 00:25:18.687 "ddgst": ${ddgst:-false} 00:25:18.687 }, 00:25:18.687 "method": "bdev_nvme_attach_controller" 00:25:18.687 } 00:25:18.687 EOF 00:25:18.687 )") 00:25:18.687 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:18.687 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:18.687 
13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:18.687 { 00:25:18.687 "params": { 00:25:18.687 "name": "Nvme$subsystem", 00:25:18.687 "trtype": "$TEST_TRANSPORT", 00:25:18.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:18.687 "adrfam": "ipv4", 00:25:18.687 "trsvcid": "$NVMF_PORT", 00:25:18.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:18.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:18.687 "hdgst": ${hdgst:-false}, 00:25:18.687 "ddgst": ${ddgst:-false} 00:25:18.687 }, 00:25:18.687 "method": "bdev_nvme_attach_controller" 00:25:18.687 } 00:25:18.687 EOF 00:25:18.687 )") 00:25:18.687 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:18.687 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:18.687 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:18.687 { 00:25:18.687 "params": { 00:25:18.687 "name": "Nvme$subsystem", 00:25:18.687 "trtype": "$TEST_TRANSPORT", 00:25:18.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:18.687 "adrfam": "ipv4", 00:25:18.687 "trsvcid": "$NVMF_PORT", 00:25:18.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:18.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:18.687 "hdgst": ${hdgst:-false}, 00:25:18.687 "ddgst": ${ddgst:-false} 00:25:18.687 }, 00:25:18.687 "method": "bdev_nvme_attach_controller" 00:25:18.687 } 00:25:18.687 EOF 00:25:18.687 )") 00:25:18.687 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:18.687 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:18.687 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:25:18.687 { 00:25:18.687 "params": { 00:25:18.687 "name": "Nvme$subsystem", 00:25:18.687 "trtype": "$TEST_TRANSPORT", 00:25:18.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:18.687 "adrfam": "ipv4", 00:25:18.687 "trsvcid": "$NVMF_PORT", 00:25:18.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:18.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:18.687 "hdgst": ${hdgst:-false}, 00:25:18.687 "ddgst": ${ddgst:-false} 00:25:18.687 }, 00:25:18.687 "method": "bdev_nvme_attach_controller" 00:25:18.687 } 00:25:18.687 EOF 00:25:18.687 )") 00:25:18.687 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:18.687 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:18.687 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:18.687 { 00:25:18.687 "params": { 00:25:18.687 "name": "Nvme$subsystem", 00:25:18.687 "trtype": "$TEST_TRANSPORT", 00:25:18.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:18.687 "adrfam": "ipv4", 00:25:18.687 "trsvcid": "$NVMF_PORT", 00:25:18.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:18.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:18.687 "hdgst": ${hdgst:-false}, 00:25:18.687 "ddgst": ${ddgst:-false} 00:25:18.687 }, 00:25:18.687 "method": "bdev_nvme_attach_controller" 00:25:18.687 } 00:25:18.687 EOF 00:25:18.687 )") 00:25:18.687 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:18.687 [2024-12-05 13:57:00.974405] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:25:18.687 [2024-12-05 13:57:00.974461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid723895 ] 00:25:18.688 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:18.688 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:18.688 { 00:25:18.688 "params": { 00:25:18.688 "name": "Nvme$subsystem", 00:25:18.688 "trtype": "$TEST_TRANSPORT", 00:25:18.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:18.688 "adrfam": "ipv4", 00:25:18.688 "trsvcid": "$NVMF_PORT", 00:25:18.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:18.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:18.688 "hdgst": ${hdgst:-false}, 00:25:18.688 "ddgst": ${ddgst:-false} 00:25:18.688 }, 00:25:18.688 "method": "bdev_nvme_attach_controller" 00:25:18.688 } 00:25:18.688 EOF 00:25:18.688 )") 00:25:18.688 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:18.688 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:18.688 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:18.688 { 00:25:18.688 "params": { 00:25:18.688 "name": "Nvme$subsystem", 00:25:18.688 "trtype": "$TEST_TRANSPORT", 00:25:18.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:18.688 "adrfam": "ipv4", 00:25:18.688 "trsvcid": "$NVMF_PORT", 00:25:18.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:18.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:18.688 "hdgst": ${hdgst:-false}, 00:25:18.688 "ddgst": ${ddgst:-false} 00:25:18.688 }, 00:25:18.688 "method": 
"bdev_nvme_attach_controller" 00:25:18.688 } 00:25:18.688 EOF 00:25:18.688 )") 00:25:18.688 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:18.688 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:18.688 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:18.688 { 00:25:18.688 "params": { 00:25:18.688 "name": "Nvme$subsystem", 00:25:18.688 "trtype": "$TEST_TRANSPORT", 00:25:18.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:18.688 "adrfam": "ipv4", 00:25:18.688 "trsvcid": "$NVMF_PORT", 00:25:18.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:18.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:18.688 "hdgst": ${hdgst:-false}, 00:25:18.688 "ddgst": ${ddgst:-false} 00:25:18.688 }, 00:25:18.688 "method": "bdev_nvme_attach_controller" 00:25:18.688 } 00:25:18.688 EOF 00:25:18.688 )") 00:25:18.688 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:18.688 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
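The trace above shows nvmf/common.sh looping over subsystems, appending one heredoc-generated `bdev_nvme_attach_controller` JSON fragment per subsystem to a `config` array, then comma-joining the fragments with `IFS=,` before piping them through `jq`. A minimal standalone sketch of that pattern (the function name `gen_config` and the environment defaults are illustrative, not the harness's real values):

```shell
# Sketch of the per-subsystem config accumulation from nvmf/common.sh:
# each loop iteration appends one heredoc-generated JSON fragment to an
# array; the fragments are comma-joined at the end, ready for jq.
gen_config() {
  config=()
  local subsystem
  for subsystem in "$@"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  # comma-join exactly as common.sh does with IFS=, before printf
  local IFS=,
  printf '%s\n' "${config[*]}"
}
```

Calling `gen_config 1 2` emits two attach-controller fragments joined by `},{`, which is why the final `printf '%s\n' '{...},{...}'` in the trace is valid input for `jq .`.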
00:25:18.688 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:25:18.688 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:18.688 "params": { 00:25:18.688 "name": "Nvme1", 00:25:18.688 "trtype": "tcp", 00:25:18.688 "traddr": "10.0.0.2", 00:25:18.688 "adrfam": "ipv4", 00:25:18.688 "trsvcid": "4420", 00:25:18.688 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:18.688 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:18.688 "hdgst": false, 00:25:18.688 "ddgst": false 00:25:18.688 }, 00:25:18.688 "method": "bdev_nvme_attach_controller" 00:25:18.688 },{ 00:25:18.688 "params": { 00:25:18.688 "name": "Nvme2", 00:25:18.688 "trtype": "tcp", 00:25:18.688 "traddr": "10.0.0.2", 00:25:18.688 "adrfam": "ipv4", 00:25:18.688 "trsvcid": "4420", 00:25:18.688 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:18.688 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:18.688 "hdgst": false, 00:25:18.688 "ddgst": false 00:25:18.688 }, 00:25:18.688 "method": "bdev_nvme_attach_controller" 00:25:18.688 },{ 00:25:18.688 "params": { 00:25:18.688 "name": "Nvme3", 00:25:18.688 "trtype": "tcp", 00:25:18.688 "traddr": "10.0.0.2", 00:25:18.688 "adrfam": "ipv4", 00:25:18.688 "trsvcid": "4420", 00:25:18.688 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:18.688 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:18.688 "hdgst": false, 00:25:18.688 "ddgst": false 00:25:18.688 }, 00:25:18.688 "method": "bdev_nvme_attach_controller" 00:25:18.688 },{ 00:25:18.688 "params": { 00:25:18.688 "name": "Nvme4", 00:25:18.688 "trtype": "tcp", 00:25:18.688 "traddr": "10.0.0.2", 00:25:18.688 "adrfam": "ipv4", 00:25:18.688 "trsvcid": "4420", 00:25:18.688 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:18.688 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:18.688 "hdgst": false, 00:25:18.688 "ddgst": false 00:25:18.688 }, 00:25:18.688 "method": "bdev_nvme_attach_controller" 00:25:18.688 },{ 00:25:18.688 "params": { 
00:25:18.688 "name": "Nvme5", 00:25:18.688 "trtype": "tcp", 00:25:18.688 "traddr": "10.0.0.2", 00:25:18.688 "adrfam": "ipv4", 00:25:18.688 "trsvcid": "4420", 00:25:18.688 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:18.688 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:18.688 "hdgst": false, 00:25:18.688 "ddgst": false 00:25:18.688 }, 00:25:18.688 "method": "bdev_nvme_attach_controller" 00:25:18.688 },{ 00:25:18.688 "params": { 00:25:18.688 "name": "Nvme6", 00:25:18.688 "trtype": "tcp", 00:25:18.688 "traddr": "10.0.0.2", 00:25:18.688 "adrfam": "ipv4", 00:25:18.688 "trsvcid": "4420", 00:25:18.688 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:18.688 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:18.688 "hdgst": false, 00:25:18.688 "ddgst": false 00:25:18.688 }, 00:25:18.688 "method": "bdev_nvme_attach_controller" 00:25:18.688 },{ 00:25:18.688 "params": { 00:25:18.688 "name": "Nvme7", 00:25:18.688 "trtype": "tcp", 00:25:18.688 "traddr": "10.0.0.2", 00:25:18.688 "adrfam": "ipv4", 00:25:18.688 "trsvcid": "4420", 00:25:18.688 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:18.688 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:18.688 "hdgst": false, 00:25:18.688 "ddgst": false 00:25:18.688 }, 00:25:18.688 "method": "bdev_nvme_attach_controller" 00:25:18.688 },{ 00:25:18.688 "params": { 00:25:18.688 "name": "Nvme8", 00:25:18.688 "trtype": "tcp", 00:25:18.688 "traddr": "10.0.0.2", 00:25:18.688 "adrfam": "ipv4", 00:25:18.688 "trsvcid": "4420", 00:25:18.688 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:18.688 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:18.688 "hdgst": false, 00:25:18.688 "ddgst": false 00:25:18.688 }, 00:25:18.689 "method": "bdev_nvme_attach_controller" 00:25:18.689 },{ 00:25:18.689 "params": { 00:25:18.689 "name": "Nvme9", 00:25:18.689 "trtype": "tcp", 00:25:18.689 "traddr": "10.0.0.2", 00:25:18.689 "adrfam": "ipv4", 00:25:18.689 "trsvcid": "4420", 00:25:18.689 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:18.689 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:25:18.689 "hdgst": false, 00:25:18.689 "ddgst": false 00:25:18.689 }, 00:25:18.689 "method": "bdev_nvme_attach_controller" 00:25:18.689 },{ 00:25:18.689 "params": { 00:25:18.689 "name": "Nvme10", 00:25:18.689 "trtype": "tcp", 00:25:18.689 "traddr": "10.0.0.2", 00:25:18.689 "adrfam": "ipv4", 00:25:18.689 "trsvcid": "4420", 00:25:18.689 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:18.689 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:18.689 "hdgst": false, 00:25:18.689 "ddgst": false 00:25:18.689 }, 00:25:18.689 "method": "bdev_nvme_attach_controller" 00:25:18.689 }' 00:25:18.689 [2024-12-05 13:57:01.052238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.689 [2024-12-05 13:57:01.092953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.064 Running I/O for 1 seconds... 00:25:21.139 2190.00 IOPS, 136.88 MiB/s 00:25:21.139 Latency(us) 00:25:21.139 [2024-12-05T12:57:03.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.139 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:21.139 Verification LBA range: start 0x0 length 0x400 00:25:21.139 Nvme1n1 : 1.15 277.99 17.37 0.00 0.00 227919.19 15291.73 215707.06 00:25:21.139 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:21.139 Verification LBA range: start 0x0 length 0x400 00:25:21.139 Nvme2n1 : 1.09 234.74 14.67 0.00 0.00 265484.43 29459.99 223696.21 00:25:21.139 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:21.139 Verification LBA range: start 0x0 length 0x400 00:25:21.139 Nvme3n1 : 1.14 280.91 17.56 0.00 0.00 218561.97 14792.41 213709.78 00:25:21.139 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:21.139 Verification LBA range: start 0x0 length 0x400 00:25:21.139 Nvme4n1 : 1.15 277.17 17.32 0.00 0.00 218171.39 13856.18 216705.71 00:25:21.139 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:25:21.139 Verification LBA range: start 0x0 length 0x400 00:25:21.139 Nvme5n1 : 1.17 277.58 17.35 0.00 0.00 213617.50 9050.21 224694.86 00:25:21.139 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:21.139 Verification LBA range: start 0x0 length 0x400 00:25:21.139 Nvme6n1 : 1.16 279.63 17.48 0.00 0.00 208977.40 4181.82 216705.71 00:25:21.139 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:21.139 Verification LBA range: start 0x0 length 0x400 00:25:21.139 Nvme7n1 : 1.16 278.43 17.40 0.00 0.00 206486.56 2683.86 228689.43 00:25:21.139 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:21.139 Verification LBA range: start 0x0 length 0x400 00:25:21.139 Nvme8n1 : 1.17 272.49 17.03 0.00 0.00 207646.72 14230.67 216705.71 00:25:21.139 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:21.139 Verification LBA range: start 0x0 length 0x400 00:25:21.139 Nvme9n1 : 1.18 271.75 16.98 0.00 0.00 205410.50 16727.28 215707.06 00:25:21.139 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:21.139 Verification LBA range: start 0x0 length 0x400 00:25:21.139 Nvme10n1 : 1.18 272.31 17.02 0.00 0.00 201424.31 18100.42 232684.01 00:25:21.139 [2024-12-05T12:57:03.726Z] =================================================================================================================== 00:25:21.139 [2024-12-05T12:57:03.726Z] Total : 2722.99 170.19 0.00 0.00 216360.21 2683.86 232684.01 00:25:21.426 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:25:21.426 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:21.426 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
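The `stoptarget`/`nvmftestfini` teardown that follows removes the bdevperf state files and then stops the nvmf target through `killprocess`. A hedged sketch of that kill-and-wait idiom, simplified from the autotest_common.sh trace (the name `killprocess_sketch` is illustrative, not the real helper):

```shell
# Hedged sketch of the killprocess idiom from autotest_common.sh:
# confirm the pid still maps to a process, refuse to kill a sudo
# wrapper, send the signal, then wait so the exit is observed.
killprocess_sketch() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 1   # pid must exist
  if [ "$(uname)" = Linux ]; then
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1         # never kill the sudo shim
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true          # reap if it is our child
}
```

The `wait` step matters: it reaps the child so a later liveness check (`kill -0`) reliably reports the process as gone, matching the `kill` / `wait` pair in the trace.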
00:25:21.426 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:21.426 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:21.426 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:21.426 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:25:21.427 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:21.427 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:25:21.427 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:21.427 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:21.427 rmmod nvme_tcp 00:25:21.427 rmmod nvme_fabrics 00:25:21.427 rmmod nvme_keyring 00:25:21.427 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:21.427 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:25:21.427 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:25:21.427 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 723121 ']' 00:25:21.427 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 723121 00:25:21.427 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 723121 ']' 00:25:21.427 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 723121 00:25:21.427 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:25:21.427 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:21.427 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 723121 00:25:21.427 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:21.427 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:21.427 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 723121' 00:25:21.427 killing process with pid 723121 00:25:21.427 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 723121 00:25:21.427 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 723121 00:25:21.993 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:21.993 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:21.993 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:21.993 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:25:21.993 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:25:21.993 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:21.993 13:57:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:25:21.993 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:21.993 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:21.993 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.993 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:21.993 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:23.895 00:25:23.895 real 0m15.248s 00:25:23.895 user 0m33.588s 00:25:23.895 sys 0m5.884s 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:23.895 ************************************ 00:25:23.895 END TEST nvmf_shutdown_tc1 00:25:23.895 ************************************ 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:23.895 ************************************ 00:25:23.895 
START TEST nvmf_shutdown_tc2 00:25:23.895 ************************************ 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:23.895 13:57:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:23.895 13:57:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:23.895 13:57:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:23.895 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:23.896 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:23.896 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:23.896 13:57:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.896 13:57:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:23.896 Found net devices under 0000:86:00.0: cvl_0_0 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:23.896 Found net devices under 0000:86:00.1: cvl_0_1 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:23.896 13:57:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:23.896 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:24.155 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:24.155 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:24.155 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:24.155 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:24.155 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:24.155 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:24.155 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:24.155 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:24.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:24.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:25:24.155 00:25:24.155 --- 10.0.0.2 ping statistics --- 00:25:24.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.155 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:25:24.155 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:24.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:24.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:25:24.155 00:25:24.155 --- 10.0.0.1 ping statistics --- 00:25:24.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.155 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:25:24.155 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:24.155 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:25:24.155 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:24.155 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:24.155 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:24.155 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:24.155 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:24.155 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:24.155 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:24.413 13:57:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:24.413 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:24.413 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:24.413 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:24.413 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=725034 00:25:24.413 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 725034 00:25:24.413 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:24.413 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 725034 ']' 00:25:24.413 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.413 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:24.413 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:24.413 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:24.413 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:24.413 [2024-12-05 13:57:06.834208] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:25:24.413 [2024-12-05 13:57:06.834251] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:24.413 [2024-12-05 13:57:06.915720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:24.413 [2024-12-05 13:57:06.955599] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:24.413 [2024-12-05 13:57:06.955635] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:24.413 [2024-12-05 13:57:06.955642] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:24.413 [2024-12-05 13:57:06.955648] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:24.413 [2024-12-05 13:57:06.955653] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
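The namespace wiring that nvmf_tcp_init performs above (common.sh@271 through @287: create the namespace, move cvl_0_0 into it, address both ends, bring links up, and open TCP port 4420) can be sketched as a dry run. Interface names, IPs, and the port come straight from the log; the `run` wrapper only records each command so the sketch stays runnable without root — swap it for `sudo "$@"` to actually apply.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init wiring shown in the log above.
# "run" only records each command; nothing here needs root as written.
NS=cvl_0_0_ns_spdk
cmds=()
run() { cmds+=("$*"); }   # swap for: sudo "$@" to apply for real

run ip netns add "$NS"                                          # common.sh@271
run ip link set cvl_0_0 netns "$NS"                             # common.sh@274
run ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # common.sh@287

printf '%s\n' "${cmds[@]}"
```

The harness then ping-verifies both directions (common.sh@290/@291) before prefixing `ip netns exec $NS` onto the nvmf_tgt command line.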
00:25:24.413 [2024-12-05 13:57:06.957290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:24.413 [2024-12-05 13:57:06.957411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:24.413 [2024-12-05 13:57:06.957519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:24.413 [2024-12-05 13:57:06.957519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:25.348 [2024-12-05 13:57:07.713197] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.348 13:57:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.348 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:25.348 Malloc1 00:25:25.348 [2024-12-05 13:57:07.818085] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:25.348 Malloc2 00:25:25.348 Malloc3 00:25:25.348 Malloc4 00:25:25.607 Malloc5 00:25:25.607 Malloc6 00:25:25.607 Malloc7 00:25:25.607 Malloc8 00:25:25.607 Malloc9 
00:25:25.607 Malloc10 00:25:25.865 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.865 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:25.865 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:25.865 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:25.865 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=725378 00:25:25.865 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 725378 /var/tmp/bdevperf.sock 00:25:25.865 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 725378 ']' 00:25:25.865 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:25.865 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:25.865 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:25.865 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:25.865 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:25:25.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:25.865 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:25:25.865 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:25.865 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:25:25.865 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:25.865 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:25.865 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:25.865 { 00:25:25.865 "params": { 00:25:25.865 "name": "Nvme$subsystem", 00:25:25.865 "trtype": "$TEST_TRANSPORT", 00:25:25.865 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:25.865 "adrfam": "ipv4", 00:25:25.865 "trsvcid": "$NVMF_PORT", 00:25:25.865 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:25.865 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:25.865 "hdgst": ${hdgst:-false}, 00:25:25.865 "ddgst": ${ddgst:-false} 00:25:25.865 }, 00:25:25.865 "method": "bdev_nvme_attach_controller" 00:25:25.865 } 00:25:25.865 EOF 00:25:25.865 )") 00:25:25.865 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:25.865 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:25.865 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:25.865 { 00:25:25.865 "params": { 00:25:25.866 "name": "Nvme$subsystem", 00:25:25.866 "trtype": "$TEST_TRANSPORT", 00:25:25.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:25.866 
"adrfam": "ipv4", 00:25:25.866 "trsvcid": "$NVMF_PORT", 00:25:25.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:25.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:25.866 "hdgst": ${hdgst:-false}, 00:25:25.866 "ddgst": ${ddgst:-false} 00:25:25.866 }, 00:25:25.866 "method": "bdev_nvme_attach_controller" 00:25:25.866 } 00:25:25.866 EOF 00:25:25.866 )") 00:25:25.866 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:25.866 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:25.866 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:25.866 { 00:25:25.866 "params": { 00:25:25.866 "name": "Nvme$subsystem", 00:25:25.866 "trtype": "$TEST_TRANSPORT", 00:25:25.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:25.866 "adrfam": "ipv4", 00:25:25.866 "trsvcid": "$NVMF_PORT", 00:25:25.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:25.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:25.866 "hdgst": ${hdgst:-false}, 00:25:25.866 "ddgst": ${ddgst:-false} 00:25:25.866 }, 00:25:25.866 "method": "bdev_nvme_attach_controller" 00:25:25.866 } 00:25:25.866 EOF 00:25:25.866 )") 00:25:25.866 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:25.866 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:25.866 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:25.866 { 00:25:25.866 "params": { 00:25:25.866 "name": "Nvme$subsystem", 00:25:25.866 "trtype": "$TEST_TRANSPORT", 00:25:25.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:25.866 "adrfam": "ipv4", 00:25:25.866 "trsvcid": "$NVMF_PORT", 00:25:25.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:25:25.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:25.866 "hdgst": ${hdgst:-false}, 00:25:25.866 "ddgst": ${ddgst:-false} 00:25:25.866 }, 00:25:25.866 "method": "bdev_nvme_attach_controller" 00:25:25.866 } 00:25:25.866 EOF 00:25:25.866 )") 00:25:25.866 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:25.866 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:25.866 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:25.866 { 00:25:25.866 "params": { 00:25:25.866 "name": "Nvme$subsystem", 00:25:25.866 "trtype": "$TEST_TRANSPORT", 00:25:25.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:25.866 "adrfam": "ipv4", 00:25:25.866 "trsvcid": "$NVMF_PORT", 00:25:25.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:25.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:25.866 "hdgst": ${hdgst:-false}, 00:25:25.866 "ddgst": ${ddgst:-false} 00:25:25.866 }, 00:25:25.866 "method": "bdev_nvme_attach_controller" 00:25:25.866 } 00:25:25.866 EOF 00:25:25.866 )") 00:25:25.866 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:25.866 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:25.866 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:25.866 { 00:25:25.866 "params": { 00:25:25.866 "name": "Nvme$subsystem", 00:25:25.866 "trtype": "$TEST_TRANSPORT", 00:25:25.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:25.866 "adrfam": "ipv4", 00:25:25.866 "trsvcid": "$NVMF_PORT", 00:25:25.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:25.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:25.866 "hdgst": ${hdgst:-false}, 00:25:25.866 "ddgst": 
${ddgst:-false} 00:25:25.866 }, 00:25:25.866 "method": "bdev_nvme_attach_controller" 00:25:25.866 } 00:25:25.866 EOF 00:25:25.866 )") 00:25:25.866 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:25.866 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:25.866 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:25.866 { 00:25:25.866 "params": { 00:25:25.866 "name": "Nvme$subsystem", 00:25:25.866 "trtype": "$TEST_TRANSPORT", 00:25:25.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:25.866 "adrfam": "ipv4", 00:25:25.866 "trsvcid": "$NVMF_PORT", 00:25:25.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:25.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:25.866 "hdgst": ${hdgst:-false}, 00:25:25.866 "ddgst": ${ddgst:-false} 00:25:25.866 }, 00:25:25.866 "method": "bdev_nvme_attach_controller" 00:25:25.866 } 00:25:25.866 EOF 00:25:25.866 )") 00:25:25.866 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:25.866 [2024-12-05 13:57:08.295169] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:25:25.866 [2024-12-05 13:57:08.295225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid725378 ] 00:25:25.866 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:25.866 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:25.866 { 00:25:25.866 "params": { 00:25:25.866 "name": "Nvme$subsystem", 00:25:25.866 "trtype": "$TEST_TRANSPORT", 00:25:25.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:25.866 "adrfam": "ipv4", 00:25:25.866 "trsvcid": "$NVMF_PORT", 00:25:25.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:25.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:25.866 "hdgst": ${hdgst:-false}, 00:25:25.866 "ddgst": ${ddgst:-false} 00:25:25.866 }, 00:25:25.866 "method": "bdev_nvme_attach_controller" 00:25:25.866 } 00:25:25.866 EOF 00:25:25.866 )") 00:25:25.866 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:25.866 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:25.866 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:25.866 { 00:25:25.866 "params": { 00:25:25.866 "name": "Nvme$subsystem", 00:25:25.866 "trtype": "$TEST_TRANSPORT", 00:25:25.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:25.866 "adrfam": "ipv4", 00:25:25.866 "trsvcid": "$NVMF_PORT", 00:25:25.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:25.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:25.866 "hdgst": ${hdgst:-false}, 00:25:25.866 "ddgst": ${ddgst:-false} 00:25:25.866 }, 00:25:25.866 "method": 
"bdev_nvme_attach_controller" 00:25:25.866 } 00:25:25.866 EOF 00:25:25.866 )") 00:25:25.866 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:25.866 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:25.866 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:25.866 { 00:25:25.866 "params": { 00:25:25.866 "name": "Nvme$subsystem", 00:25:25.866 "trtype": "$TEST_TRANSPORT", 00:25:25.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:25.866 "adrfam": "ipv4", 00:25:25.866 "trsvcid": "$NVMF_PORT", 00:25:25.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:25.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:25.866 "hdgst": ${hdgst:-false}, 00:25:25.866 "ddgst": ${ddgst:-false} 00:25:25.866 }, 00:25:25.866 "method": "bdev_nvme_attach_controller" 00:25:25.866 } 00:25:25.866 EOF 00:25:25.866 )") 00:25:25.866 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:25.866 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:25:25.866 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:25:25.866 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:25.866 "params": { 00:25:25.866 "name": "Nvme1", 00:25:25.866 "trtype": "tcp", 00:25:25.866 "traddr": "10.0.0.2", 00:25:25.866 "adrfam": "ipv4", 00:25:25.866 "trsvcid": "4420", 00:25:25.866 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:25.866 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:25.866 "hdgst": false, 00:25:25.866 "ddgst": false 00:25:25.866 }, 00:25:25.866 "method": "bdev_nvme_attach_controller" 00:25:25.866 },{ 00:25:25.866 "params": { 00:25:25.866 "name": "Nvme2", 00:25:25.866 "trtype": "tcp", 00:25:25.866 "traddr": "10.0.0.2", 00:25:25.866 "adrfam": "ipv4", 00:25:25.866 "trsvcid": "4420", 00:25:25.866 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:25.866 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:25.866 "hdgst": false, 00:25:25.866 "ddgst": false 00:25:25.866 }, 00:25:25.866 "method": "bdev_nvme_attach_controller" 00:25:25.866 },{ 00:25:25.866 "params": { 00:25:25.866 "name": "Nvme3", 00:25:25.866 "trtype": "tcp", 00:25:25.866 "traddr": "10.0.0.2", 00:25:25.866 "adrfam": "ipv4", 00:25:25.866 "trsvcid": "4420", 00:25:25.866 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:25.866 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:25.866 "hdgst": false, 00:25:25.867 "ddgst": false 00:25:25.867 }, 00:25:25.867 "method": "bdev_nvme_attach_controller" 00:25:25.867 },{ 00:25:25.867 "params": { 00:25:25.867 "name": "Nvme4", 00:25:25.867 "trtype": "tcp", 00:25:25.867 "traddr": "10.0.0.2", 00:25:25.867 "adrfam": "ipv4", 00:25:25.867 "trsvcid": "4420", 00:25:25.867 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:25.867 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:25.867 "hdgst": false, 00:25:25.867 "ddgst": false 00:25:25.867 }, 00:25:25.867 "method": "bdev_nvme_attach_controller" 00:25:25.867 },{ 00:25:25.867 "params": { 
00:25:25.867 "name": "Nvme5", 00:25:25.867 "trtype": "tcp", 00:25:25.867 "traddr": "10.0.0.2", 00:25:25.867 "adrfam": "ipv4", 00:25:25.867 "trsvcid": "4420", 00:25:25.867 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:25.867 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:25.867 "hdgst": false, 00:25:25.867 "ddgst": false 00:25:25.867 }, 00:25:25.867 "method": "bdev_nvme_attach_controller" 00:25:25.867 },{ 00:25:25.867 "params": { 00:25:25.867 "name": "Nvme6", 00:25:25.867 "trtype": "tcp", 00:25:25.867 "traddr": "10.0.0.2", 00:25:25.867 "adrfam": "ipv4", 00:25:25.867 "trsvcid": "4420", 00:25:25.867 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:25.867 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:25.867 "hdgst": false, 00:25:25.867 "ddgst": false 00:25:25.867 }, 00:25:25.867 "method": "bdev_nvme_attach_controller" 00:25:25.867 },{ 00:25:25.867 "params": { 00:25:25.867 "name": "Nvme7", 00:25:25.867 "trtype": "tcp", 00:25:25.867 "traddr": "10.0.0.2", 00:25:25.867 "adrfam": "ipv4", 00:25:25.867 "trsvcid": "4420", 00:25:25.867 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:25.867 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:25.867 "hdgst": false, 00:25:25.867 "ddgst": false 00:25:25.867 }, 00:25:25.867 "method": "bdev_nvme_attach_controller" 00:25:25.867 },{ 00:25:25.867 "params": { 00:25:25.867 "name": "Nvme8", 00:25:25.867 "trtype": "tcp", 00:25:25.867 "traddr": "10.0.0.2", 00:25:25.867 "adrfam": "ipv4", 00:25:25.867 "trsvcid": "4420", 00:25:25.867 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:25.867 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:25.867 "hdgst": false, 00:25:25.867 "ddgst": false 00:25:25.867 }, 00:25:25.867 "method": "bdev_nvme_attach_controller" 00:25:25.867 },{ 00:25:25.867 "params": { 00:25:25.867 "name": "Nvme9", 00:25:25.867 "trtype": "tcp", 00:25:25.867 "traddr": "10.0.0.2", 00:25:25.867 "adrfam": "ipv4", 00:25:25.867 "trsvcid": "4420", 00:25:25.867 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:25.867 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:25:25.867 "hdgst": false, 00:25:25.867 "ddgst": false 00:25:25.867 }, 00:25:25.867 "method": "bdev_nvme_attach_controller" 00:25:25.867 },{ 00:25:25.867 "params": { 00:25:25.867 "name": "Nvme10", 00:25:25.867 "trtype": "tcp", 00:25:25.867 "traddr": "10.0.0.2", 00:25:25.867 "adrfam": "ipv4", 00:25:25.867 "trsvcid": "4420", 00:25:25.867 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:25.867 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:25.867 "hdgst": false, 00:25:25.867 "ddgst": false 00:25:25.867 }, 00:25:25.867 "method": "bdev_nvme_attach_controller" 00:25:25.867 }' 00:25:25.867 [2024-12-05 13:57:08.374917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.867 [2024-12-05 13:57:08.415753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.242 Running I/O for 10 seconds... 00:25:27.809 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:27.809 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:25:27.809 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:27.809 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.809 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:27.809 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.809 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:27.809 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:27.809 13:57:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:25:27.809 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:25:27.809 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:25:27.809 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:25:27.809 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:27.809 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:27.809 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:27.809 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.809 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:27.809 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.809 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:25:27.809 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:25:27.809 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:25:27.809 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:25:27.809 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:25:27.809 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- target/shutdown.sh@111 -- # killprocess 725378 00:25:27.809 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 725378 ']' 00:25:27.809 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 725378 00:25:27.809 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:25:27.810 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:27.810 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 725378 00:25:27.810 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:27.810 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:27.810 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 725378' 00:25:27.810 killing process with pid 725378 00:25:27.810 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 725378 00:25:27.810 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 725378 00:25:28.069 Received shutdown signal, test time was about 0.694741 seconds 00:25:28.069 00:25:28.069 Latency(us) 00:25:28.069 [2024-12-05T12:57:10.656Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:28.069 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:28.069 Verification LBA range: start 0x0 length 0x400 00:25:28.069 Nvme1n1 : 0.66 289.27 18.08 0.00 0.00 217521.33 15978.30 217704.35 00:25:28.069 Job: Nvme2n1 (Core Mask 0x1, workload: 
verify, depth: 64, IO size: 65536) 00:25:28.069 Verification LBA range: start 0x0 length 0x400 00:25:28.069 Nvme2n1 : 0.68 282.66 17.67 0.00 0.00 217865.59 33953.89 190740.97 00:25:28.069 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:28.069 Verification LBA range: start 0x0 length 0x400 00:25:28.069 Nvme3n1 : 0.66 296.25 18.52 0.00 0.00 200874.97 4774.77 223696.21 00:25:28.069 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:28.069 Verification LBA range: start 0x0 length 0x400 00:25:28.069 Nvme4n1 : 0.67 285.14 17.82 0.00 0.00 205620.50 16103.13 197731.47 00:25:28.069 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:28.069 Verification LBA range: start 0x0 length 0x400 00:25:28.069 Nvme5n1 : 0.69 279.57 17.47 0.00 0.00 204931.49 15853.47 219701.64 00:25:28.069 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:28.069 Verification LBA range: start 0x0 length 0x400 00:25:28.069 Nvme6n1 : 0.69 277.81 17.36 0.00 0.00 201242.98 17601.10 219701.64 00:25:28.069 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:28.069 Verification LBA range: start 0x0 length 0x400 00:25:28.069 Nvme7n1 : 0.68 284.13 17.76 0.00 0.00 191015.98 17850.76 218702.99 00:25:28.069 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:28.069 Verification LBA range: start 0x0 length 0x400 00:25:28.069 Nvme8n1 : 0.68 281.09 17.57 0.00 0.00 188292.47 22469.49 194735.54 00:25:28.069 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:28.069 Verification LBA range: start 0x0 length 0x400 00:25:28.069 Nvme9n1 : 0.67 212.65 13.29 0.00 0.00 235233.80 9861.61 241671.80 00:25:28.069 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:28.069 Verification LBA range: start 0x0 length 0x400 00:25:28.069 Nvme10n1 : 0.69 276.62 17.29 0.00 0.00 181854.60 17601.10 221698.93 
00:25:28.069 [2024-12-05T12:57:10.656Z] =================================================================================================================== 00:25:28.069 [2024-12-05T12:57:10.656Z] Total : 2765.18 172.82 0.00 0.00 203614.83 4774.77 241671.80 00:25:28.069 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:25:29.446 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 725034 00:25:29.446 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:25:29.446 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:29.446 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:29.446 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:29.446 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:29.446 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:29.446 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:25:29.446 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:29.446 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:25:29.446 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:29.446 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # 
modprobe -v -r nvme-tcp 00:25:29.446 rmmod nvme_tcp 00:25:29.446 rmmod nvme_fabrics 00:25:29.446 rmmod nvme_keyring 00:25:29.447 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:29.447 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:25:29.447 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:25:29.447 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 725034 ']' 00:25:29.447 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 725034 00:25:29.447 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 725034 ']' 00:25:29.447 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 725034 00:25:29.447 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:25:29.447 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:29.447 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 725034 00:25:29.447 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:29.447 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:29.447 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 725034' 00:25:29.447 killing process with pid 725034 00:25:29.447 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@973 -- # kill 725034 00:25:29.447 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 725034 00:25:29.706 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:29.706 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:29.706 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:29.706 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:25:29.706 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:25:29.706 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:29.706 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:25:29.706 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:29.706 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:29.706 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.706 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:29.706 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.612 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:31.612 00:25:31.612 real 0m7.724s 00:25:31.612 user 0m22.849s 00:25:31.612 sys 0m1.302s 
00:25:31.612 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:31.612 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:31.612 ************************************ 00:25:31.612 END TEST nvmf_shutdown_tc2 00:25:31.612 ************************************ 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:31.872 ************************************ 00:25:31.872 START TEST nvmf_shutdown_tc3 00:25:31.872 ************************************ 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:31.872 13:57:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:31.872 13:57:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:31.872 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.872 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:31.873 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:31.873 Found net devices under 0000:86:00.0: cvl_0_0 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:31.873 Found net devices under 0000:86:00.1: cvl_0_1 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:31.873 13:57:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:31.873 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:32.133 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set lo up 00:25:32.133 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:32.133 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:32.133 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:32.133 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:32.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:25:32.133 00:25:32.133 --- 10.0.0.2 ping statistics --- 00:25:32.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.133 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:25:32.133 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:32.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:32.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:25:32.133 00:25:32.133 --- 10.0.0.1 ping statistics --- 00:25:32.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.133 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:25:32.133 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:32.133 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:25:32.133 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:32.133 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:32.133 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:32.133 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:32.133 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:32.133 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:32.133 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:32.133 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:32.133 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:32.133 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:32.133 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:32.133 
13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=726745 00:25:32.133 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 726745 00:25:32.133 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:32.133 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 726745 ']' 00:25:32.133 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.133 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:32.133 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.133 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:32.133 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:32.133 [2024-12-05 13:57:14.605399] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:25:32.133 [2024-12-05 13:57:14.605443] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:32.133 [2024-12-05 13:57:14.682132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:32.393 [2024-12-05 13:57:14.724279] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:32.393 [2024-12-05 13:57:14.724313] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:32.393 [2024-12-05 13:57:14.724320] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:32.393 [2024-12-05 13:57:14.724326] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:32.393 [2024-12-05 13:57:14.724331] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:32.393 [2024-12-05 13:57:14.725830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:32.393 [2024-12-05 13:57:14.725938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:32.393 [2024-12-05 13:57:14.726066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:32.393 [2024-12-05 13:57:14.726067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:32.393 [2024-12-05 13:57:14.862557] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.393 13:57:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.393 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:32.393 Malloc1 00:25:32.393 [2024-12-05 13:57:14.968518] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:32.651 Malloc2 00:25:32.651 Malloc3 00:25:32.651 Malloc4 00:25:32.651 Malloc5 00:25:32.651 Malloc6 00:25:32.651 Malloc7 00:25:32.910 Malloc8 00:25:32.910 Malloc9 
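The repeated `for i in "${num_subsystems[@]}"` / `cat` pairs above come from target/shutdown.sh@28-29: each iteration appends one subsystem's block of RPCs to rpcs.txt, and the whole file is replayed later in a single `rpc_cmd` invocation. A minimal sketch of that batching pattern follows; the RPC names are real SPDK rpc.py commands, but the sizes, serial numbers, and addresses are illustrative rather than copied from shutdown.sh:

```shell
# target/shutdown.sh@23 builds the same subsystem list
num_subsystems=({1..10})

# Emit one block of management RPCs per subsystem, mirroring the repeated
# 'for i in "${num_subsystems[@]}"' / 'cat' pairs in the log above.
gen_rpcs() {
  local i
  for i in "${num_subsystems[@]}"; do
    cat <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
  done
}

# shutdown.sh collects this into target/rpcs.txt and replays it in one
# rpc_cmd call instead of paying per-RPC process startup cost.
gen_rpcs | wc -l   # 10 subsystems x 4 RPCs each
```

Batching is why the Malloc1..Malloc10 bdevs appear in the log in quick succession: all forty RPCs go over one socket connection.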
00:25:32.910 Malloc10 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=727012 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 727012 /var/tmp/bdevperf.sock 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 727012 ']' 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:25:32.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:32.910 { 00:25:32.910 "params": { 00:25:32.910 "name": "Nvme$subsystem", 00:25:32.910 "trtype": "$TEST_TRANSPORT", 00:25:32.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.910 "adrfam": "ipv4", 00:25:32.910 "trsvcid": "$NVMF_PORT", 00:25:32.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.910 "hdgst": ${hdgst:-false}, 00:25:32.910 "ddgst": ${ddgst:-false} 00:25:32.910 }, 00:25:32.910 "method": "bdev_nvme_attach_controller" 00:25:32.910 } 00:25:32.910 EOF 00:25:32.910 )") 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:32.910 { 00:25:32.910 "params": { 00:25:32.910 "name": "Nvme$subsystem", 00:25:32.910 "trtype": "$TEST_TRANSPORT", 00:25:32.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.910 
"adrfam": "ipv4", 00:25:32.910 "trsvcid": "$NVMF_PORT", 00:25:32.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.910 "hdgst": ${hdgst:-false}, 00:25:32.910 "ddgst": ${ddgst:-false} 00:25:32.910 }, 00:25:32.910 "method": "bdev_nvme_attach_controller" 00:25:32.910 } 00:25:32.910 EOF 00:25:32.910 )") 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:32.910 { 00:25:32.910 "params": { 00:25:32.910 "name": "Nvme$subsystem", 00:25:32.910 "trtype": "$TEST_TRANSPORT", 00:25:32.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.910 "adrfam": "ipv4", 00:25:32.910 "trsvcid": "$NVMF_PORT", 00:25:32.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.910 "hdgst": ${hdgst:-false}, 00:25:32.910 "ddgst": ${ddgst:-false} 00:25:32.910 }, 00:25:32.910 "method": "bdev_nvme_attach_controller" 00:25:32.910 } 00:25:32.910 EOF 00:25:32.910 )") 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:32.910 { 00:25:32.910 "params": { 00:25:32.910 "name": "Nvme$subsystem", 00:25:32.910 "trtype": "$TEST_TRANSPORT", 00:25:32.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.910 "adrfam": "ipv4", 00:25:32.910 "trsvcid": "$NVMF_PORT", 00:25:32.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:25:32.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.910 "hdgst": ${hdgst:-false}, 00:25:32.910 "ddgst": ${ddgst:-false} 00:25:32.910 }, 00:25:32.910 "method": "bdev_nvme_attach_controller" 00:25:32.910 } 00:25:32.910 EOF 00:25:32.910 )") 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:32.910 { 00:25:32.910 "params": { 00:25:32.910 "name": "Nvme$subsystem", 00:25:32.910 "trtype": "$TEST_TRANSPORT", 00:25:32.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.910 "adrfam": "ipv4", 00:25:32.910 "trsvcid": "$NVMF_PORT", 00:25:32.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.910 "hdgst": ${hdgst:-false}, 00:25:32.910 "ddgst": ${ddgst:-false} 00:25:32.910 }, 00:25:32.910 "method": "bdev_nvme_attach_controller" 00:25:32.910 } 00:25:32.910 EOF 00:25:32.910 )") 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:32.910 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:32.910 { 00:25:32.910 "params": { 00:25:32.910 "name": "Nvme$subsystem", 00:25:32.911 "trtype": "$TEST_TRANSPORT", 00:25:32.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.911 "adrfam": "ipv4", 00:25:32.911 "trsvcid": "$NVMF_PORT", 00:25:32.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.911 "hdgst": ${hdgst:-false}, 00:25:32.911 "ddgst": 
${ddgst:-false} 00:25:32.911 }, 00:25:32.911 "method": "bdev_nvme_attach_controller" 00:25:32.911 } 00:25:32.911 EOF 00:25:32.911 )") 00:25:32.911 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:32.911 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:32.911 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:32.911 { 00:25:32.911 "params": { 00:25:32.911 "name": "Nvme$subsystem", 00:25:32.911 "trtype": "$TEST_TRANSPORT", 00:25:32.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.911 "adrfam": "ipv4", 00:25:32.911 "trsvcid": "$NVMF_PORT", 00:25:32.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.911 "hdgst": ${hdgst:-false}, 00:25:32.911 "ddgst": ${ddgst:-false} 00:25:32.911 }, 00:25:32.911 "method": "bdev_nvme_attach_controller" 00:25:32.911 } 00:25:32.911 EOF 00:25:32.911 )") 00:25:32.911 [2024-12-05 13:57:15.439103] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:25:32.911 [2024-12-05 13:57:15.439155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid727012 ] 00:25:32.911 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:32.911 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:32.911 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:32.911 { 00:25:32.911 "params": { 00:25:32.911 "name": "Nvme$subsystem", 00:25:32.911 "trtype": "$TEST_TRANSPORT", 00:25:32.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.911 "adrfam": "ipv4", 00:25:32.911 "trsvcid": "$NVMF_PORT", 00:25:32.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.911 "hdgst": ${hdgst:-false}, 00:25:32.911 "ddgst": ${ddgst:-false} 00:25:32.911 }, 00:25:32.911 "method": "bdev_nvme_attach_controller" 00:25:32.911 } 00:25:32.911 EOF 00:25:32.911 )") 00:25:32.911 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:32.911 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:32.911 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:32.911 { 00:25:32.911 "params": { 00:25:32.911 "name": "Nvme$subsystem", 00:25:32.911 "trtype": "$TEST_TRANSPORT", 00:25:32.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.911 "adrfam": "ipv4", 00:25:32.911 "trsvcid": "$NVMF_PORT", 00:25:32.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.911 "hdgst": 
${hdgst:-false}, 00:25:32.911 "ddgst": ${ddgst:-false} 00:25:32.911 }, 00:25:32.911 "method": "bdev_nvme_attach_controller" 00:25:32.911 } 00:25:32.911 EOF 00:25:32.911 )") 00:25:32.911 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:32.911 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:32.911 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:32.911 { 00:25:32.911 "params": { 00:25:32.911 "name": "Nvme$subsystem", 00:25:32.911 "trtype": "$TEST_TRANSPORT", 00:25:32.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.911 "adrfam": "ipv4", 00:25:32.911 "trsvcid": "$NVMF_PORT", 00:25:32.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.911 "hdgst": ${hdgst:-false}, 00:25:32.911 "ddgst": ${ddgst:-false} 00:25:32.911 }, 00:25:32.911 "method": "bdev_nvme_attach_controller" 00:25:32.911 } 00:25:32.911 EOF 00:25:32.911 )") 00:25:32.911 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:32.911 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
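The `config+=("$(cat <<-EOF ...)")` blocks and the `IFS=,` / `printf` / `jq .` tail above are nvmf/common.sh's gen_nvmf_target_json at work: one JSON attach-controller object per subsystem, comma-joined into the config that bdevperf consumes. A runnable sketch of the same pattern, with fixed illustrative values in place of `$TEST_TRANSPORT`/`$NVMF_FIRST_TARGET_IP` and an outer array wrapper that may differ from the real helper's exact output shape (which also pipes through `jq .` to validate):

```shell
gen_target_json() {
  # One JSON object per requested subsystem, as in nvmf/common.sh@560-586.
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
  done
  # 'IFS=,' makes "${config[*]}" join the objects with commas, exactly the
  # '},{'  boundaries visible in the printed config below; the real helper
  # then runs the result through 'jq .' for validation and pretty-printing.
  local IFS=,
  printf '[%s]\n' "${config[*]}"
}
```

With `gen_target_json 1 2`, two comma-joined objects for cnode1 and cnode2 come out, matching the Nvme1..Nvme10 expansion shown in the log.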
00:25:32.911 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:25:32.911 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:32.911 "params": { 00:25:32.911 "name": "Nvme1", 00:25:32.911 "trtype": "tcp", 00:25:32.911 "traddr": "10.0.0.2", 00:25:32.911 "adrfam": "ipv4", 00:25:32.911 "trsvcid": "4420", 00:25:32.911 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:32.911 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:32.911 "hdgst": false, 00:25:32.911 "ddgst": false 00:25:32.911 }, 00:25:32.911 "method": "bdev_nvme_attach_controller" 00:25:32.911 },{ 00:25:32.911 "params": { 00:25:32.911 "name": "Nvme2", 00:25:32.911 "trtype": "tcp", 00:25:32.911 "traddr": "10.0.0.2", 00:25:32.911 "adrfam": "ipv4", 00:25:32.911 "trsvcid": "4420", 00:25:32.911 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:32.911 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:32.911 "hdgst": false, 00:25:32.911 "ddgst": false 00:25:32.911 }, 00:25:32.911 "method": "bdev_nvme_attach_controller" 00:25:32.911 },{ 00:25:32.911 "params": { 00:25:32.911 "name": "Nvme3", 00:25:32.911 "trtype": "tcp", 00:25:32.911 "traddr": "10.0.0.2", 00:25:32.911 "adrfam": "ipv4", 00:25:32.911 "trsvcid": "4420", 00:25:32.911 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:32.911 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:32.911 "hdgst": false, 00:25:32.911 "ddgst": false 00:25:32.911 }, 00:25:32.911 "method": "bdev_nvme_attach_controller" 00:25:32.911 },{ 00:25:32.911 "params": { 00:25:32.911 "name": "Nvme4", 00:25:32.911 "trtype": "tcp", 00:25:32.911 "traddr": "10.0.0.2", 00:25:32.911 "adrfam": "ipv4", 00:25:32.911 "trsvcid": "4420", 00:25:32.911 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:32.911 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:32.911 "hdgst": false, 00:25:32.911 "ddgst": false 00:25:32.911 }, 00:25:32.911 "method": "bdev_nvme_attach_controller" 00:25:32.911 },{ 00:25:32.911 "params": { 
00:25:32.911 "name": "Nvme5", 00:25:32.911 "trtype": "tcp", 00:25:32.911 "traddr": "10.0.0.2", 00:25:32.911 "adrfam": "ipv4", 00:25:32.911 "trsvcid": "4420", 00:25:32.911 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:32.911 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:32.911 "hdgst": false, 00:25:32.911 "ddgst": false 00:25:32.911 }, 00:25:32.911 "method": "bdev_nvme_attach_controller" 00:25:32.911 },{ 00:25:32.911 "params": { 00:25:32.911 "name": "Nvme6", 00:25:32.911 "trtype": "tcp", 00:25:32.911 "traddr": "10.0.0.2", 00:25:32.911 "adrfam": "ipv4", 00:25:32.911 "trsvcid": "4420", 00:25:32.911 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:32.911 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:32.911 "hdgst": false, 00:25:32.911 "ddgst": false 00:25:32.911 }, 00:25:32.911 "method": "bdev_nvme_attach_controller" 00:25:32.911 },{ 00:25:32.911 "params": { 00:25:32.911 "name": "Nvme7", 00:25:32.911 "trtype": "tcp", 00:25:32.911 "traddr": "10.0.0.2", 00:25:32.911 "adrfam": "ipv4", 00:25:32.911 "trsvcid": "4420", 00:25:32.911 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:32.911 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:32.911 "hdgst": false, 00:25:32.911 "ddgst": false 00:25:32.911 }, 00:25:32.911 "method": "bdev_nvme_attach_controller" 00:25:32.911 },{ 00:25:32.911 "params": { 00:25:32.911 "name": "Nvme8", 00:25:32.911 "trtype": "tcp", 00:25:32.911 "traddr": "10.0.0.2", 00:25:32.911 "adrfam": "ipv4", 00:25:32.911 "trsvcid": "4420", 00:25:32.911 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:32.911 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:32.911 "hdgst": false, 00:25:32.911 "ddgst": false 00:25:32.911 }, 00:25:32.911 "method": "bdev_nvme_attach_controller" 00:25:32.911 },{ 00:25:32.911 "params": { 00:25:32.911 "name": "Nvme9", 00:25:32.911 "trtype": "tcp", 00:25:32.911 "traddr": "10.0.0.2", 00:25:32.911 "adrfam": "ipv4", 00:25:32.911 "trsvcid": "4420", 00:25:32.911 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:32.911 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:25:32.911 "hdgst": false, 00:25:32.911 "ddgst": false 00:25:32.911 }, 00:25:32.911 "method": "bdev_nvme_attach_controller" 00:25:32.911 },{ 00:25:32.911 "params": { 00:25:32.911 "name": "Nvme10", 00:25:32.911 "trtype": "tcp", 00:25:32.911 "traddr": "10.0.0.2", 00:25:32.911 "adrfam": "ipv4", 00:25:32.911 "trsvcid": "4420", 00:25:32.911 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:32.911 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:32.912 "hdgst": false, 00:25:32.912 "ddgst": false 00:25:32.912 }, 00:25:32.912 "method": "bdev_nvme_attach_controller" 00:25:32.912 }' 00:25:33.169 [2024-12-05 13:57:15.513773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.169 [2024-12-05 13:57:15.554939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.543 Running I/O for 10 seconds... 00:25:34.805 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:34.805 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:25:34.805 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:34.805 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.805 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:34.805 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.805 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:34.805 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:25:34.805 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:34.805 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:25:34.805 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:25:34.805 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:25:34.805 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:25:34.805 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:34.805 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:34.805 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:34.805 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.805 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:34.805 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.064 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=80 00:25:35.064 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 80 -ge 100 ']' 00:25:35.064 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:35.064 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- 
)) 00:25:35.340 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:35.340 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:35.340 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:35.340 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.340 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:35.340 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.340 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:25:35.340 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:25:35.340 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:25:35.340 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:25:35.340 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:25:35.340 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 726745 00:25:35.340 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 726745 ']' 00:25:35.340 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 726745 00:25:35.340 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:25:35.340 13:57:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:35.340 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 726745 00:25:35.340 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:35.340 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:35.340 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 726745' killing process with pid 726745 00:25:35.340 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 726745 00:25:35.340 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 726745 00:25:35.340 [2024-12-05 13:57:17.758415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4f8d0 is same with the state(6) to be set
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4f8d0 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.758828] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4f8d0 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.758834] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4f8d0 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.758842] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4f8d0 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.758848] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4f8d0 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.758854] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4f8d0 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.758860] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4f8d0 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.758865] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4f8d0 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.758872] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4f8d0 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760086] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760095] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760102] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760109] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760115] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760122] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760128] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760135] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760141] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760148] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760163] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760169] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760176] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760183] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760189] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760196] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760203] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760210] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760216] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760222] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760233] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760240] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760253] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760260] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760267] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760279] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760291] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760297] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760304] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760316] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760322] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760342] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760349] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760363] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760380] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760428] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760434] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760453] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760459] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760472] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760479] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.760498] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3e60 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.761739] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.761750] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.761758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.761765] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.761771] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.761779] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.761787] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.761793] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.341 [2024-12-05 13:57:17.761799] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.761805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.761813] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.761819] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.761825] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.761834] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.761841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.761848] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.761854] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.761860] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.761866] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.761873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.761881] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.761887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.761893] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.761900] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.761907] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.761914] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.761921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.761928] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.761934] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.761940] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.761946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.761952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.761958] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.761964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.761971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.761977] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.761983] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.761990] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.761996] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.762002] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.762010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.762017] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.762024] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.762030] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.762037] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.762043] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.762049] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.762055] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.762061] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.762067] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.762073] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.762079] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.762094] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.762101] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.762107] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.762112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.762119] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.762125] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.762131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.762137] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.762143] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.762149] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.762155] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4fdc0 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.763398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50290 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.763423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50290 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.763431] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50290 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.763438] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50290 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.763448] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50290 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.763455] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50290 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.763462] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50290 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.763469] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50290 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.763475] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50290 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.763481] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50290 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.763488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50290 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.763494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50290 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.763501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50290 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.763508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50290 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.763514] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50290 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.763520] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50290 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.763526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50290 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.763532] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50290 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.763539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50290 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.763546] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50290 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.763553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50290 is same with the state(6) to be set 00:25:35.342 [2024-12-05 13:57:17.763560] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50290 is same with the state(6) to be set 00:25:35.343 [2024-12-05 13:57:17.764847] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50780 is same with the state(6) to be set 00:25:35.344 [2024-12-05 13:57:17.765909] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50c50 is same with the state(6) to be set 00:25:35.345 [2024-12-05 13:57:17.767875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51610 is same with the state(6) to be set 00:25:35.345 [2024-12-05 13:57:17.769119] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51b00 is same with the state(6) to be set 00:25:35.346 [2024-12-05 13:57:17.769264] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51b00 is same with the state(6) to be set 00:25:35.346 [2024-12-05 13:57:17.769270] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51b00 is same with the state(6) to be set 00:25:35.346 [2024-12-05 13:57:17.769278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51b00 is same with the state(6) to be set 00:25:35.346 [2024-12-05 13:57:17.769284] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51b00 is same with the state(6) to be set 00:25:35.346 [2024-12-05 13:57:17.769290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51b00 is same with the state(6) to be set 00:25:35.346 [2024-12-05 13:57:17.769296] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51b00 is same with the state(6) to be set 00:25:35.346 [2024-12-05 13:57:17.769302] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51b00 is same with the state(6) to be set 00:25:35.346 [2024-12-05 13:57:17.769308] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51b00 is same with the state(6) to be set 00:25:35.346 [2024-12-05 13:57:17.769314] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51b00 is same with the state(6) to be set 00:25:35.346 [2024-12-05 13:57:17.769320] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51b00 is same with the state(6) to be set 00:25:35.346 [2024-12-05 13:57:17.769325] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51b00 is same with the state(6) to be set 00:25:35.346 [2024-12-05 13:57:17.769331] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51b00 is same with the state(6) to be set 00:25:35.346 [2024-12-05 13:57:17.769338] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51b00 is same with the state(6) to be set 00:25:35.346 [2024-12-05 13:57:17.769344] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51b00 is same with the state(6) to be set 00:25:35.346 [2024-12-05 13:57:17.769350] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51b00 is same with the state(6) to be set 00:25:35.346 [2024-12-05 13:57:17.769356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51b00 is same with the state(6) to be set 00:25:35.346 [2024-12-05 13:57:17.769362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51b00 is same with the state(6) to be set 00:25:35.346 [2024-12-05 13:57:17.769372] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51b00 is same with the state(6) to be set 00:25:35.346 [2024-12-05 13:57:17.769377] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51b00 is same with the state(6) to be set 00:25:35.346 [2024-12-05 13:57:17.769383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51b00 is same with the state(6) to be set 00:25:35.346 [2024-12-05 13:57:17.769389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51b00 is same with the state(6) to be set 00:25:35.346 [2024-12-05 13:57:17.769395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51b00 is same with the state(6) to be set 00:25:35.346 [2024-12-05 13:57:17.769401] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51b00 is same with the state(6) to be set 00:25:35.346 [2024-12-05 13:57:17.775436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.346 [2024-12-05 13:57:17.775465] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.346 [2024-12-05 13:57:17.775475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.346 [2024-12-05 13:57:17.775482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.346 [2024-12-05 13:57:17.775490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.346 [2024-12-05 13:57:17.775497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.346 [2024-12-05 13:57:17.775509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.346 [2024-12-05 13:57:17.775516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.346 [2024-12-05 13:57:17.775523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1965820 is same with the state(6) to be set 00:25:35.346 [... the same four ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs and recv-state error repeated for tqpair=0x1961d80, 0x1502c70, 0x1504d30, 0x14f9200, 0x1419610, 0x1930240, 0x15051c0 and 0x1930cf0 through 2024-12-05 13:57:17.776181 ...] 00:25:35.347 [2024-12-05 13:57:17.776473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT
DATA BLOCK TRANSPORT 0x0 00:25:35.347 [2024-12-05 13:57:17.776493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.347 [... the same WRITE / ABORTED - SQ DELETION pair repeated for cid:1 through cid:13 (lba:24704 through lba:26240, len:128), interleaved with further repeats of the tcp.c:1790 recv-state error for tqpair=0x1c51b00 through 2024-12-05 13:57:17.776729 ...] 00:25:35.347 [2024-12-05 13:57:17.776737] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.347 [2024-12-05 13:57:17.776744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.347 [... the same WRITE / ABORTED - SQ DELETION pair repeated for cid:15 through cid:35 (lba:26496 through lba:29056, len:128) ...] 00:25:35.348 [2024-12-05 13:57:17.777064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.348 [2024-12-05 13:57:17.777071] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.348 [2024-12-05 13:57:17.777079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.348 [2024-12-05 13:57:17.777085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.348 [2024-12-05 13:57:17.777094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.348 [2024-12-05 13:57:17.777100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.348 [2024-12-05 13:57:17.777108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.348 [2024-12-05 13:57:17.777115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.348 [2024-12-05 13:57:17.777123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.348 [2024-12-05 13:57:17.777130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.348 [2024-12-05 13:57:17.777138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.348 [2024-12-05 13:57:17.777145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.348 [2024-12-05 13:57:17.777155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.348 [2024-12-05 13:57:17.777161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.348 [2024-12-05 13:57:17.777171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.348 [2024-12-05 13:57:17.777178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.348 [2024-12-05 13:57:17.777186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.348 [2024-12-05 13:57:17.777192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.348 [2024-12-05 13:57:17.777201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.348 [2024-12-05 13:57:17.777207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.348 [2024-12-05 13:57:17.777216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.348 [2024-12-05 13:57:17.777223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.348 [2024-12-05 13:57:17.777231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.348 [2024-12-05 13:57:17.777238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:35.348 [2024-12-05 13:57:17.777247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.348 [2024-12-05 13:57:17.777253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.348 [2024-12-05 13:57:17.777261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.348 [2024-12-05 13:57:17.777268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.348 [2024-12-05 13:57:17.777276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.348 [2024-12-05 13:57:17.777273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.348 [2024-12-05 13:57:17.777284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.348 [2024-12-05 13:57:17.777288] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.348 [2024-12-05 13:57:17.777292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.348 [2024-12-05 13:57:17.777295] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.348 [2024-12-05 13:57:17.777300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.348 [2024-12-05 13:57:17.777302] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.349 [2024-12-05 13:57:17.777309] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.349 [2024-12-05 13:57:17.777322] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777331] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.349 [2024-12-05 13:57:17.777337] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.349 [2024-12-05 13:57:17.777344] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.349 [2024-12-05 13:57:17.777350] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.349 [2024-12-05 13:57:17.777358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777373] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.349 [2024-12-05 13:57:17.777380] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.349 [2024-12-05 13:57:17.777386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.349 [2024-12-05 13:57:17.777393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.349 [2024-12-05 13:57:17.777403] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777412] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.349 [2024-12-05 13:57:17.777418] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.349 [2024-12-05 13:57:17.777425] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777432] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.349 [2024-12-05 13:57:17.777441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.349 [2024-12-05 13:57:17.777448] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.349 [2024-12-05 13:57:17.777454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.349 [2024-12-05 13:57:17.777461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.349 [2024-12-05 13:57:17.777474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.349 [2024-12-05 13:57:17.777481] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.349 [2024-12-05 13:57:17.777487] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777496] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.349 [2024-12-05 13:57:17.777504] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.349 [2024-12-05 13:57:17.777511] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.349 [2024-12-05 13:57:17.777518] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.349 [2024-12-05 13:57:17.777525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.349 [2024-12-05 13:57:17.777534] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777546] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190ba30 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777554] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05
13:57:17.777568] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777574] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777585] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777599] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777605] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777611] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.349 [2024-12-05 13:57:17.777617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.350 [2024-12-05 13:57:17.777623] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.350 [2024-12-05 13:57:17.777629] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.350 [2024-12-05 13:57:17.777634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.350 [2024-12-05 13:57:17.777640] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.350 [2024-12-05 13:57:17.777647] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.350 [2024-12-05 13:57:17.777653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.350 [2024-12-05 13:57:17.777658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.350 [2024-12-05 13:57:17.777664] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.350 [2024-12-05 13:57:17.777670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.350 [2024-12-05 13:57:17.777675] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.350 [2024-12-05 13:57:17.777681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.350 [2024-12-05 13:57:17.777687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.350 [2024-12-05 13:57:17.777693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.350 [2024-12-05 13:57:17.777698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.350 [2024-12-05 13:57:17.777705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.350 [2024-12-05 13:57:17.777711] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.350 [2024-12-05 13:57:17.777717] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51fd0 is same with the state(6) to be set 00:25:35.350 [2024-12-05 13:57:17.779085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:25:35.350 [2024-12-05 13:57:17.779118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1502c70 (9): Bad file descriptor 00:25:35.350 [2024-12-05 13:57:17.780038] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:35.350 [2024-12-05 13:57:17.780183] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:35.350 [2024-12-05 13:57:17.780308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.350 [2024-12-05 13:57:17.780324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1502c70 with addr=10.0.0.2, port=4420 00:25:35.350 [2024-12-05 13:57:17.780332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502c70 is same with the state(6) to be set 00:25:35.350 [2024-12-05 13:57:17.780385] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:35.350 [2024-12-05 13:57:17.780430] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:35.350 [2024-12-05 13:57:17.780473] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:35.350 [2024-12-05 13:57:17.780517] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:35.350 [2024-12-05 13:57:17.780561] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:35.350 [2024-12-05 13:57:17.780626] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:35.350 [2024-12-05 
13:57:17.780697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1502c70 (9): Bad file descriptor 00:25:35.350 [2024-12-05 13:57:17.780804] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:35.350 [2024-12-05 13:57:17.780820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:25:35.350 [2024-12-05 13:57:17.780828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:25:35.350 [2024-12-05 13:57:17.780836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:25:35.350 [2024-12-05 13:57:17.780845] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:25:35.350 [2024-12-05 13:57:17.785431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1965820 (9): Bad file descriptor 00:25:35.350 [2024-12-05 13:57:17.785454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1961d80 (9): Bad file descriptor 00:25:35.350 [2024-12-05 13:57:17.785471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1504d30 (9): Bad file descriptor 00:25:35.350 [2024-12-05 13:57:17.785486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f9200 (9): Bad file descriptor 00:25:35.350 [2024-12-05 13:57:17.785500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1419610 (9): Bad file descriptor 00:25:35.350 [2024-12-05 13:57:17.785530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.350 [2024-12-05 13:57:17.785539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:35.350 [2024-12-05 13:57:17.785547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.350 [2024-12-05 13:57:17.785558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.350 [2024-12-05 13:57:17.785566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.350 [2024-12-05 13:57:17.785572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.350 [2024-12-05 13:57:17.785579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.350 [2024-12-05 13:57:17.785586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.350 [2024-12-05 13:57:17.785592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d100 is same with the state(6) to be set 00:25:35.350 [2024-12-05 13:57:17.785607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1930240 (9): Bad file descriptor 00:25:35.350 [2024-12-05 13:57:17.785622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15051c0 (9): Bad file descriptor 00:25:35.350 [2024-12-05 13:57:17.785634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1930cf0 (9): Bad file descriptor 00:25:35.350 [2024-12-05 13:57:17.789650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:25:35.350 [2024-12-05 13:57:17.789871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.350 
[2024-12-05 13:57:17.789887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1502c70 with addr=10.0.0.2, port=4420 00:25:35.350 [2024-12-05 13:57:17.789895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502c70 is same with the state(6) to be set 00:25:35.350 [2024-12-05 13:57:17.789935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1502c70 (9): Bad file descriptor 00:25:35.350 [2024-12-05 13:57:17.789972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:25:35.350 [2024-12-05 13:57:17.789980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:25:35.350 [2024-12-05 13:57:17.789989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:25:35.350 [2024-12-05 13:57:17.789997] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:25:35.350 [2024-12-05 13:57:17.795483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x197d100 (9): Bad file descriptor 00:25:35.350 [2024-12-05 13:57:17.795629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.350 [2024-12-05 13:57:17.795642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.350 [2024-12-05 13:57:17.795657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.350 [2024-12-05 13:57:17.795665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.350 [2024-12-05 13:57:17.795674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.350 [2024-12-05 13:57:17.795681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.350 [2024-12-05 13:57:17.795690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.350 [2024-12-05 13:57:17.795697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.350 [2024-12-05 13:57:17.795710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.350 [2024-12-05 13:57:17.795717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.350 [2024-12-05 13:57:17.795726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.350 [2024-12-05 13:57:17.795733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.350 [2024-12-05 13:57:17.795742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.350 [2024-12-05 13:57:17.795748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.350 [2024-12-05 13:57:17.795757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.350 [2024-12-05 13:57:17.795764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.350 [2024-12-05 13:57:17.795772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.350 [2024-12-05 13:57:17.795779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.350 [2024-12-05 13:57:17.795787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.350 [2024-12-05 13:57:17.795794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.350 [2024-12-05 13:57:17.795803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.350 [2024-12-05 13:57:17.795810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.795818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.795826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.795834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.795840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.795849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.795855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.795864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.795870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.795879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.795885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.795894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.795902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.795911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.795917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.795926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.795932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.795941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.795947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.795955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.795962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.795970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.795976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.795985] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.795994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.796003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.796010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.796018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.796025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.796033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.796040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.796049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.796056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.796064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.796070] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.796079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.796086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.796096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.796103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.796111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.796117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.796126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.796132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.796141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.796148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.796156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.796162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.796171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.796177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.796185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.796192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.796200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.796208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.796216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.796222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.796232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.796238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 
13:57:17.796248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.796254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.796263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.796269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.796278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.796286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.796295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.796302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.796310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.796317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.796325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.796332] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.796340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.796347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.796355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.796362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.796374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.796381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.351 [2024-12-05 13:57:17.796389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.351 [2024-12-05 13:57:17.796396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.796405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.796411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.796420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 
nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.796426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.796434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.796441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.796449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.796456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.796464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.796471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.796480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.796487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.796495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.796502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:35.352 [2024-12-05 13:57:17.796510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.796517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.796525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.796532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.796540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.796547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.796556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.796563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.796571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.796578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.796586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.796592] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.796601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.796607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.796615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.796622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.796630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17091a0 is same with the state(6) to be set 00:25:35.352 [2024-12-05 13:57:17.797638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.797653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.797664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.797671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.797680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.797690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.797698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.797705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.797714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.797720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.797729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.797736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.797744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.797751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.797760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.797767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.797775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:35.352 [2024-12-05 13:57:17.797782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.797790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.797797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.797805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.797812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.797820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.797826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.797835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.797842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.797851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.797857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.797865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.797872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.797885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.797892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.797900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.797906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.797915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.797921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.797930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.797936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.797945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.797952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.797960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.797967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.797974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.797981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.797989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.797997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.352 [2024-12-05 13:57:17.798005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.352 [2024-12-05 13:57:17.798012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.353 [2024-12-05 13:57:17.798020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.353 [2024-12-05 13:57:17.798027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.353 [2024-12-05 13:57:17.798035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:35.353 [2024-12-05 13:57:17.798042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:35.353 [2024-12-05 13:57:17.798050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.353 [2024-12-05 13:57:17.798056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:27 through cid:63, lba advancing from 28032 to 32640 in steps of 128, timestamps 13:57:17.798064 through 13:57:17.798617 ...]
00:25:35.354 [2024-12-05 13:57:17.798625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170a150 is same with the state(6) to be set
00:25:35.354 [2024-12-05 13:57:17.799618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... identical aborted-command pairs repeat on this qpair: READ cid:4-5 (lba 25088-25216), WRITE cid:0-3 (lba 32768-33152), then READ cid:6 through cid:63 (lba 25344 through 32640, steps of 128), each command followed by ABORTED - SQ DELETION (00/08), timestamps 13:57:17.799634 through 13:57:17.800592 ...]
00:25:35.355 [2024-12-05 13:57:17.800600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170b220 is same with the state(6) to be set
00:25:35.355 [2024-12-05 13:57:17.801595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:0 through cid:14, lba advancing from 24576 to 26368 in steps of 128, timestamps 13:57:17.801607 through 13:57:17.801822 ...]
00:25:35.356 [2024-12-05 13:57:17.801831] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.801838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.801846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.801859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.801867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.801874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.801882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.801889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.801897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.801903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.801911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.801918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.801926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.801933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.801941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.801947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.801956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.801964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.801972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.801979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.801987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.801994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.802002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.802008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.802017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.802023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.802032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.802038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.802047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.802053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.802061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.802068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.802076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.802083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.802091] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.802098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.802106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.802112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.802120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.802127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.802135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.802142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.802152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.802159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.802167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.802174] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.802182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.802189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.802197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.802204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.802212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.802218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.802227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.802233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.802241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.802248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.802256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.802262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.802270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.802277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.802285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.802291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.356 [2024-12-05 13:57:17.802299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.356 [2024-12-05 13:57:17.802306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.802314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.802321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.802329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.802337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 
13:57:17.802345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.802353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.802361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.802372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.802380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.802387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.802395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.802402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.802410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.802417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.802425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.802432] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.802441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.802448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.802456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.802463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.802472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.802478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.802487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.802493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.802502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.802508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.802516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 
nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.802523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.802531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.802539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.802547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.802554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.802563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.802569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.802577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1909520 is same with the state(6) to be set 00:25:35.357 [2024-12-05 13:57:17.803561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.803573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.803584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:35.357 [2024-12-05 13:57:17.803591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.803599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.803607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.803615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.803622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.803631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.803638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.803646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.803653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.803662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.803668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.803676] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.803683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.803692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.803699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.803707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.803719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.803728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.803735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.803744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.803750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.803759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.803765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.803773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.803780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.803788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.803795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.803803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.803810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.803818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.803825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.803833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.803839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.803848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.803855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.803863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.803869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.803878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.803884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.803892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.357 [2024-12-05 13:57:17.803899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.357 [2024-12-05 13:57:17.803909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.358 [2024-12-05 13:57:17.803915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.358 [2024-12-05 13:57:17.803924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.358 [2024-12-05 13:57:17.803930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.358 [2024-12-05 13:57:17.803939] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.358 [2024-12-05 13:57:17.803946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the command/completion pair above repeats for READ commands sqid:1 cid:25-63 (lba 19584-24448, len:128), each completed ABORTED - SQ DELETION (00/08) ...]
00:25:35.359 [2024-12-05 13:57:17.804563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190a770 is same with the state(6) to be set
00:25:35.359 [2024-12-05 13:57:17.805548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.359 [2024-12-05 13:57:17.805561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeats for READ commands sqid:1 cid:7-61 (lba 25472-32384, len:128), WRITE commands sqid:1 cid:0-5 (lba 32768-33408, len:128), and READ commands sqid:1 cid:62-63 (lba 32512-32640, len:128), each completed ABORTED - SQ DELETION (00/08) ...]
00:25:35.360 [2024-12-05 13:57:17.811758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190ccf0 is same with the state(6) to be set
00:25:35.360 [2024-12-05 13:57:17.812791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.360 [2024-12-05 13:57:17.812804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeats for READ commands sqid:1 cid:1-13 (lba 24704-26240, len:128), each completed ABORTED - SQ DELETION (00/08); section truncated mid-record at 13:57:17.813040 ...]
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.361 [2024-12-05 13:57:17.813050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.361 [2024-12-05 13:57:17.813067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.361 [2024-12-05 13:57:17.813084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.361 [2024-12-05 13:57:17.813101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.361 [2024-12-05 13:57:17.813120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.361 [2024-12-05 13:57:17.813138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.361 [2024-12-05 13:57:17.813155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.361 [2024-12-05 13:57:17.813172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.361 [2024-12-05 13:57:17.813189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.361 [2024-12-05 13:57:17.813207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.361 [2024-12-05 13:57:17.813225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.361 [2024-12-05 
13:57:17.813242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.361 [2024-12-05 13:57:17.813260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.361 [2024-12-05 13:57:17.813277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.361 [2024-12-05 13:57:17.813294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.361 [2024-12-05 13:57:17.813311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.361 [2024-12-05 13:57:17.813329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813338] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.361 [2024-12-05 13:57:17.813348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.361 [2024-12-05 13:57:17.813365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.361 [2024-12-05 13:57:17.813388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.361 [2024-12-05 13:57:17.813405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.361 [2024-12-05 13:57:17.813422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.361 [2024-12-05 13:57:17.813440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 
nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.361 [2024-12-05 13:57:17.813457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.361 [2024-12-05 13:57:17.813474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.361 [2024-12-05 13:57:17.813492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.361 [2024-12-05 13:57:17.813509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.361 [2024-12-05 13:57:17.813526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:35.361 [2024-12-05 13:57:17.813544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.361 [2024-12-05 13:57:17.813561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.361 [2024-12-05 13:57:17.813580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.361 [2024-12-05 13:57:17.813587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.813597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.813605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.813614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.813622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.813631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.813639] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.813648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.813656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.813665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.813673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.813683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.813691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.813700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.813707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.813717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.813725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.813734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.813743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.813752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.813760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.813770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.813778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.813789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.813797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.813807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.813815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.813824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.813832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.813842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.813850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.813859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.813867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.813876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.813885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.813895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.813903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.813912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.813920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.813928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28538a0 is same with the state(6) to be set 00:25:35.362 [2024-12-05 13:57:17.815076] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.815092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.815104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.815113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.815123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.815131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.815141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.815149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.815165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.815173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.815184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.815192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.815201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.815210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.815219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.815227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.815237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.815244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.815254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.815262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.815272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.815280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.815289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.815297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.815307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.815315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.815325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.815333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.362 [2024-12-05 13:57:17.815343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.362 [2024-12-05 13:57:17.815351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.363 [2024-12-05 13:57:17.815360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.363 [2024-12-05 13:57:17.815374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.363 [2024-12-05 13:57:17.815385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.363 [2024-12-05 13:57:17.815394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.363 [2024-12-05 13:57:17.815405] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.363 [2024-12-05 13:57:17.815413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.363 [2024-12-05 13:57:17.815422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.363 [2024-12-05 13:57:17.815430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.363 [2024-12-05 13:57:17.815440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.363 [2024-12-05 13:57:17.815447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.363 [2024-12-05 13:57:17.815457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.363 [2024-12-05 13:57:17.815465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.363 [2024-12-05 13:57:17.815475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.363 [2024-12-05 13:57:17.815483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.363 [2024-12-05 13:57:17.815492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.363 [2024-12-05 13:57:17.815500] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:35.363 [2024-12-05 13:57:17.815509 .. 13:57:17.816221] nvme_qpair.c: [41 repeated command/completion pairs elided: READ sqid:1 cid:23..63 nsid:1 lba:19328..24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:25:35.364 [2024-12-05 13:57:17.816229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1749f80 is same with the state(6) to be set
00:25:35.364 [2024-12-05 13:57:17.817346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:25:35.364 [2024-12-05 13:57:17.817364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:25:35.364 [2024-12-05 13:57:17.817380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:25:35.364 [2024-12-05 13:57:17.817450] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:25:35.364 [2024-12-05 13:57:17.817464] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:25:35.364 [2024-12-05 13:57:17.817481] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:25:35.364 [2024-12-05 13:57:17.817500] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:25:35.364 [2024-12-05 13:57:17.817513] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:25:35.364 [2024-12-05 13:57:17.817610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:25:35.364 [2024-12-05 13:57:17.817624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:25:35.364 [2024-12-05 13:57:17.817635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:25:35.364 [2024-12-05 13:57:17.817647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:25:35.364 [2024-12-05 13:57:17.817883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.364 [2024-12-05 13:57:17.817898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15051c0 with addr=10.0.0.2, port=4420
00:25:35.364 [2024-12-05 13:57:17.817908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15051c0 is same with the state(6) to be set
00:25:35.364 [2024-12-05 13:57:17.818153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.364 [2024-12-05 13:57:17.818166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f9200 with addr=10.0.0.2, port=4420
00:25:35.364 [2024-12-05 13:57:17.818174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f9200 is same with the state(6) to be set
00:25:35.364 [2024-12-05 13:57:17.818312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.364 [2024-12-05 13:57:17.818324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504d30 with addr=10.0.0.2, port=4420
00:25:35.364 [2024-12-05 13:57:17.818332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1504d30 is same with the state(6) to be set
00:25:35.364 [2024-12-05 13:57:17.820314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:25:35.364 [2024-12-05 13:57:17.820523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.364 [2024-12-05 13:57:17.820539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1930cf0 with addr=10.0.0.2, port=4420
00:25:35.364 [2024-12-05 13:57:17.820548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1930cf0 is same with the state(6) to be set
00:25:35.364 [2024-12-05 13:57:17.820744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.364 [2024-12-05 13:57:17.820757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1930240 with addr=10.0.0.2, port=4420
00:25:35.364 [2024-12-05 13:57:17.820765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1930240 is same with the state(6) to be set
00:25:35.364 [2024-12-05 13:57:17.820908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.364 [2024-12-05 13:57:17.820920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1419610 with addr=10.0.0.2, port=4420
00:25:35.364 [2024-12-05 13:57:17.820927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1419610 is same with the state(6) to be set
00:25:35.364 [2024-12-05 13:57:17.821189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.364 [2024-12-05 13:57:17.821201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1965820 with addr=10.0.0.2, port=4420
00:25:35.364 [2024-12-05 13:57:17.821210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1965820 is same with the state(6) to be set
00:25:35.364 [2024-12-05 13:57:17.821226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15051c0 (9): Bad file descriptor
00:25:35.364 [2024-12-05 13:57:17.821238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f9200 (9): Bad file descriptor
00:25:35.364 [2024-12-05 13:57:17.821247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1504d30 (9): Bad file descriptor
00:25:35.364 [2024-12-05 13:57:17.821358 .. 13:57:17.821626] nvme_qpair.c: [15 repeated command/completion pairs elided: READ sqid:1 cid:7..21 nsid:1 lba:25472..27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:25:35.365 [2024-12-05 13:57:17.821636 .. 13:57:17.821711] nvme_qpair.c: [5 repeated command/completion pairs elided: WRITE sqid:1 cid:0..4 nsid:1 lba:32768..33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:25:35.365 [2024-12-05 13:57:17.821721 .. 13:57:17.822353] nvme_qpair.c: [37 repeated command/completion pairs elided: READ sqid:1 cid:22..58 nsid:1 lba:27392..32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:25:35.366 [2024-12-05 13:57:17.822362 .. 13:57:17.822392] nvme_qpair.c: [2 repeated command/completion pairs elided: WRITE sqid:1 cid:5..6 nsid:1 lba:33408..33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:25:35.366 [2024-12-05 13:57:17.822402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:35.366 [2024-12-05 13:57:17.822410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:35.366 [2024-12-05 13:57:17.822419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.366
[2024-12-05 13:57:17.822427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.366 [2024-12-05 13:57:17.822438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.366 [2024-12-05 13:57:17.822447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.366 [2024-12-05 13:57:17.822456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.366 [2024-12-05 13:57:17.822464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.366 [2024-12-05 13:57:17.822473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.366 [2024-12-05 13:57:17.822481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.366 [2024-12-05 13:57:17.822490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748cf0 is same with the state(6) to be set 00:25:35.366 [2024-12-05 13:57:17.823468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:25:35.366 task offset: 24576 on job bdev=Nvme6n1 fails 00:25:35.366 00:25:35.366 Latency(us) 00:25:35.366 [2024-12-05T12:57:17.953Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:35.366 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:35.366 Job: Nvme1n1 ended in about 0.87 seconds with error 00:25:35.366 Verification LBA range: start 0x0 length 0x400 00:25:35.366 Nvme1n1 : 0.87 219.61 13.73 73.20 
0.00 216314.15 27213.04 201726.05 00:25:35.366 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:35.366 Job: Nvme2n1 ended in about 0.88 seconds with error 00:25:35.366 Verification LBA range: start 0x0 length 0x400 00:25:35.366 Nvme2n1 : 0.88 219.11 13.69 73.04 0.00 212959.33 15978.30 206719.27 00:25:35.366 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:35.366 Job: Nvme3n1 ended in about 0.88 seconds with error 00:25:35.366 Verification LBA range: start 0x0 length 0x400 00:25:35.366 Nvme3n1 : 0.88 223.17 13.95 72.87 0.00 206354.02 15478.98 219701.64 00:25:35.366 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:35.366 Job: Nvme4n1 ended in about 0.88 seconds with error 00:25:35.366 Verification LBA range: start 0x0 length 0x400 00:25:35.366 Nvme4n1 : 0.88 218.13 13.63 72.71 0.00 206225.55 26963.38 201726.05 00:25:35.366 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:35.366 Job: Nvme5n1 ended in about 0.88 seconds with error 00:25:35.366 Verification LBA range: start 0x0 length 0x400 00:25:35.366 Nvme5n1 : 0.88 145.09 9.07 72.55 0.00 270657.50 18849.40 233682.65 00:25:35.366 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:35.366 Job: Nvme6n1 ended in about 0.86 seconds with error 00:25:35.366 Verification LBA range: start 0x0 length 0x400 00:25:35.366 Nvme6n1 : 0.86 224.38 14.02 74.79 0.00 192384.98 3136.37 220700.28 00:25:35.366 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:35.366 Job: Nvme7n1 ended in about 0.89 seconds with error 00:25:35.366 Verification LBA range: start 0x0 length 0x400 00:25:35.366 Nvme7n1 : 0.89 222.63 13.91 71.96 0.00 192430.71 14293.09 214708.42 00:25:35.366 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:35.366 Job: Nvme8n1 ended in about 0.89 seconds with error 00:25:35.366 Verification LBA range: start 0x0 length 
0x400 00:25:35.366 Nvme8n1 : 0.89 215.35 13.46 71.78 0.00 193672.78 13544.11 223696.21 00:25:35.366 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:35.366 Job: Nvme9n1 ended in about 0.90 seconds with error 00:25:35.366 Verification LBA range: start 0x0 length 0x400 00:25:35.366 Nvme9n1 : 0.90 221.09 13.82 71.10 0.00 186791.25 25964.74 184749.10 00:25:35.366 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:35.366 Job: Nvme10n1 ended in about 0.89 seconds with error 00:25:35.366 Verification LBA range: start 0x0 length 0x400 00:25:35.366 Nvme10n1 : 0.89 143.20 8.95 71.60 0.00 248755.28 17476.27 234681.30 00:25:35.366 [2024-12-05T12:57:17.953Z] =================================================================================================================== 00:25:35.366 [2024-12-05T12:57:17.953Z] Total : 2051.74 128.23 725.60 0.00 210061.57 3136.37 234681.30 00:25:35.366 [2024-12-05 13:57:17.855786] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:35.366 [2024-12-05 13:57:17.855835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:25:35.366 [2024-12-05 13:57:17.856168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.366 [2024-12-05 13:57:17.856186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1961d80 with addr=10.0.0.2, port=4420 00:25:35.366 [2024-12-05 13:57:17.856196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1961d80 is same with the state(6) to be set 00:25:35.366 [2024-12-05 13:57:17.856212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1930cf0 (9): Bad file descriptor 00:25:35.366 [2024-12-05 13:57:17.856223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1930240 (9): Bad file descriptor 00:25:35.366 [2024-12-05 
13:57:17.856232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1419610 (9): Bad file descriptor 00:25:35.366 [2024-12-05 13:57:17.856241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1965820 (9): Bad file descriptor 00:25:35.366 [2024-12-05 13:57:17.856250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:25:35.366 [2024-12-05 13:57:17.856256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:25:35.366 [2024-12-05 13:57:17.856264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:25:35.366 [2024-12-05 13:57:17.856274] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:25:35.366 [2024-12-05 13:57:17.856283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:25:35.366 [2024-12-05 13:57:17.856289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:25:35.366 [2024-12-05 13:57:17.856295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:25:35.366 [2024-12-05 13:57:17.856301] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:25:35.366 [2024-12-05 13:57:17.856308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:25:35.366 [2024-12-05 13:57:17.856314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:25:35.366 [2024-12-05 13:57:17.856321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 
00:25:35.366 [2024-12-05 13:57:17.856327] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:25:35.366 [2024-12-05 13:57:17.856670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.366 [2024-12-05 13:57:17.856686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1502c70 with addr=10.0.0.2, port=4420 00:25:35.366 [2024-12-05 13:57:17.856694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502c70 is same with the state(6) to be set 00:25:35.366 [2024-12-05 13:57:17.856850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.366 [2024-12-05 13:57:17.856860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x197d100 with addr=10.0.0.2, port=4420 00:25:35.366 [2024-12-05 13:57:17.856872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d100 is same with the state(6) to be set 00:25:35.366 [2024-12-05 13:57:17.856882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1961d80 (9): Bad file descriptor 00:25:35.367 [2024-12-05 13:57:17.856891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:25:35.367 [2024-12-05 13:57:17.856897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:25:35.367 [2024-12-05 13:57:17.856903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:25:35.367 [2024-12-05 13:57:17.856910] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:25:35.367 [2024-12-05 13:57:17.856917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:25:35.367 [2024-12-05 13:57:17.856923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:25:35.367 [2024-12-05 13:57:17.856929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:25:35.367 [2024-12-05 13:57:17.856935] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:25:35.367 [2024-12-05 13:57:17.856941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:25:35.367 [2024-12-05 13:57:17.856947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:25:35.367 [2024-12-05 13:57:17.856953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:25:35.367 [2024-12-05 13:57:17.856959] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:25:35.367 [2024-12-05 13:57:17.856966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:25:35.367 [2024-12-05 13:57:17.856971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:25:35.367 [2024-12-05 13:57:17.856977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:25:35.367 [2024-12-05 13:57:17.856983] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:25:35.367 [2024-12-05 13:57:17.857044] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:25:35.367 [2024-12-05 13:57:17.857328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1502c70 (9): Bad file descriptor 00:25:35.367 [2024-12-05 13:57:17.857340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x197d100 (9): Bad file descriptor 00:25:35.367 [2024-12-05 13:57:17.857348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:25:35.367 [2024-12-05 13:57:17.857353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:25:35.367 [2024-12-05 13:57:17.857360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:25:35.367 [2024-12-05 13:57:17.857366] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:25:35.367 [2024-12-05 13:57:17.857407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:25:35.367 [2024-12-05 13:57:17.857418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:25:35.367 [2024-12-05 13:57:17.857426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:35.367 [2024-12-05 13:57:17.857433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:25:35.367 [2024-12-05 13:57:17.857444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:25:35.367 [2024-12-05 13:57:17.857452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:25:35.367 [2024-12-05 13:57:17.857459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:25:35.367 [2024-12-05 13:57:17.857500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:25:35.367 [2024-12-05 13:57:17.857506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:25:35.367 [2024-12-05 13:57:17.857513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:25:35.367 [2024-12-05 13:57:17.857519] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:25:35.367 [2024-12-05 13:57:17.857525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:25:35.367 [2024-12-05 13:57:17.857531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:25:35.367 [2024-12-05 13:57:17.857537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:25:35.367 [2024-12-05 13:57:17.857542] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:25:35.367 [2024-12-05 13:57:17.857810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.367 [2024-12-05 13:57:17.857823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504d30 with addr=10.0.0.2, port=4420 00:25:35.367 [2024-12-05 13:57:17.857830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1504d30 is same with the state(6) to be set 00:25:35.367 [2024-12-05 13:57:17.858050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.367 [2024-12-05 13:57:17.858061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14f9200 with addr=10.0.0.2, port=4420 00:25:35.367 [2024-12-05 13:57:17.858068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f9200 is same with the state(6) to be set 00:25:35.367 [2024-12-05 13:57:17.858218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.367 [2024-12-05 13:57:17.858228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15051c0 with addr=10.0.0.2, port=4420 00:25:35.367 [2024-12-05 13:57:17.858235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15051c0 is same with the state(6) to be set 00:25:35.367 [2024-12-05 
13:57:17.858386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.367 [2024-12-05 13:57:17.858397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1965820 with addr=10.0.0.2, port=4420 00:25:35.367 [2024-12-05 13:57:17.858404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1965820 is same with the state(6) to be set 00:25:35.367 [2024-12-05 13:57:17.858501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.367 [2024-12-05 13:57:17.858511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1419610 with addr=10.0.0.2, port=4420 00:25:35.367 [2024-12-05 13:57:17.858518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1419610 is same with the state(6) to be set 00:25:35.367 [2024-12-05 13:57:17.858752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.367 [2024-12-05 13:57:17.858762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1930240 with addr=10.0.0.2, port=4420 00:25:35.367 [2024-12-05 13:57:17.858769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1930240 is same with the state(6) to be set 00:25:35.367 [2024-12-05 13:57:17.858908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.367 [2024-12-05 13:57:17.858918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1930cf0 with addr=10.0.0.2, port=4420 00:25:35.367 [2024-12-05 13:57:17.858925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1930cf0 is same with the state(6) to be set 00:25:35.367 [2024-12-05 13:57:17.858953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1504d30 (9): Bad file descriptor 00:25:35.367 [2024-12-05 13:57:17.858963] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f9200 (9): Bad file descriptor 00:25:35.367 [2024-12-05 13:57:17.858971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15051c0 (9): Bad file descriptor 00:25:35.367 [2024-12-05 13:57:17.858980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1965820 (9): Bad file descriptor 00:25:35.367 [2024-12-05 13:57:17.858988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1419610 (9): Bad file descriptor 00:25:35.367 [2024-12-05 13:57:17.858996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1930240 (9): Bad file descriptor 00:25:35.367 [2024-12-05 13:57:17.859003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1930cf0 (9): Bad file descriptor 00:25:35.367 [2024-12-05 13:57:17.859027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:25:35.367 [2024-12-05 13:57:17.859034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:25:35.367 [2024-12-05 13:57:17.859041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:25:35.367 [2024-12-05 13:57:17.859048] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:25:35.367 [2024-12-05 13:57:17.859054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:25:35.367 [2024-12-05 13:57:17.859060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:25:35.367 [2024-12-05 13:57:17.859066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:25:35.367 [2024-12-05 13:57:17.859072] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:25:35.367 [2024-12-05 13:57:17.859078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:25:35.368 [2024-12-05 13:57:17.859084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:25:35.368 [2024-12-05 13:57:17.859090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:25:35.368 [2024-12-05 13:57:17.859096] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:25:35.368 [2024-12-05 13:57:17.859102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:25:35.368 [2024-12-05 13:57:17.859107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:25:35.368 [2024-12-05 13:57:17.859113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:25:35.368 [2024-12-05 13:57:17.859119] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:25:35.368 [2024-12-05 13:57:17.859125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:25:35.368 [2024-12-05 13:57:17.859130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:25:35.368 [2024-12-05 13:57:17.859137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:25:35.368 [2024-12-05 13:57:17.859145] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:25:35.368 [2024-12-05 13:57:17.859151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:25:35.368 [2024-12-05 13:57:17.859156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:25:35.368 [2024-12-05 13:57:17.859162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:25:35.368 [2024-12-05 13:57:17.859168] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:25:35.368 [2024-12-05 13:57:17.859174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:25:35.368 [2024-12-05 13:57:17.859180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:25:35.368 [2024-12-05 13:57:17.859186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:25:35.368 [2024-12-05 13:57:17.859192] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:25:35.627 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:25:37.006 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 727012 00:25:37.006 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:25:37.006 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 727012 00:25:37.006 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:25:37.006 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:37.006 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:25:37.006 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:37.006 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 727012 00:25:37.006 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:25:37.006 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:37.006 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:25:37.006 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:25:37.006 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:25:37.006 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:25:37.007 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:25:37.007 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:37.007 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:37.007 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:37.007 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:37.007 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:37.007 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:25:37.007 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:37.007 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:25:37.007 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:37.007 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:37.007 rmmod nvme_tcp 00:25:37.007 rmmod nvme_fabrics 00:25:37.007 rmmod nvme_keyring 00:25:37.007 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:37.007 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:25:37.007 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:25:37.007 13:57:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 726745 ']' 00:25:37.007 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 726745 00:25:37.007 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 726745 ']' 00:25:37.007 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 726745 00:25:37.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (726745) - No such process 00:25:37.007 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 726745 is not found' 00:25:37.007 Process with pid 726745 is not found 00:25:37.007 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:37.007 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:37.007 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:37.007 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:25:37.007 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:25:37.007 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:37.007 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:25:37.007 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:37.007 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:25:37.007 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.007 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:37.007 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.911 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:38.911 00:25:38.911 real 0m7.091s 00:25:38.911 user 0m16.063s 00:25:38.911 sys 0m1.353s 00:25:38.911 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:38.911 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:38.911 ************************************ 00:25:38.911 END TEST nvmf_shutdown_tc3 00:25:38.911 ************************************ 00:25:38.911 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:25:38.911 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:25:38.911 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:25:38.911 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:38.911 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:38.911 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:38.911 ************************************ 00:25:38.911 START TEST nvmf_shutdown_tc4 00:25:38.911 ************************************ 00:25:38.911 13:57:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:25:38.911 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:25:38.911 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:38.911 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:38.911 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:38.911 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:38.911 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:38.911 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:38.911 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.911 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:38.911 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.911 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:38.911 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:38.911 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:38.911 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:38.911 13:57:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:38.911 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:38.911 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:38.911 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:38.911 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:38.911 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:38.911 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:38.912 13:57:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:38.912 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:38.912 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:38.912 13:57:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:25:38.912 Found net devices under 0000:86:00.0: cvl_0_0 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:38.912 Found net devices under 0000:86:00.1: cvl_0_1 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:38.912 13:57:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:38.912 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:39.171 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:39.171 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:39.171 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:39.171 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:39.171 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:39.171 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:39.171 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:39.171 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:39.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:39.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:25:39.171 00:25:39.171 --- 10.0.0.2 ping statistics --- 00:25:39.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.171 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:25:39.171 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:39.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:39.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:25:39.171 00:25:39.171 --- 10.0.0.1 ping statistics --- 00:25:39.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.171 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:25:39.171 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:39.171 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:25:39.171 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:39.171 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:39.171 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:39.171 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:39.171 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:39.171 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:39.171 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:39.171 13:57:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:39.171 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:39.171 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:39.171 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:39.171 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=728064 00:25:39.171 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 728064 00:25:39.171 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:39.171 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 728064 ']' 00:25:39.171 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.171 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:39.171 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:39.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:39.171 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:39.171 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:39.429 [2024-12-05 13:57:21.793563] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:25:39.429 [2024-12-05 13:57:21.793613] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:39.429 [2024-12-05 13:57:21.873927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:39.429 [2024-12-05 13:57:21.916293] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:39.429 [2024-12-05 13:57:21.916329] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:39.429 [2024-12-05 13:57:21.916336] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:39.429 [2024-12-05 13:57:21.916342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:39.429 [2024-12-05 13:57:21.916347] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:39.429 [2024-12-05 13:57:21.917846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:39.429 [2024-12-05 13:57:21.917954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:39.429 [2024-12-05 13:57:21.918063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:39.429 [2024-12-05 13:57:21.918064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:40.359 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:40.360 [2024-12-05 13:57:22.666184] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.360 13:57:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.360 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:40.360 Malloc1 00:25:40.360 [2024-12-05 13:57:22.771905] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:40.360 Malloc2 00:25:40.360 Malloc3 00:25:40.360 Malloc4 00:25:40.360 Malloc5 00:25:40.617 Malloc6 00:25:40.617 Malloc7 00:25:40.617 Malloc8 00:25:40.617 Malloc9 
00:25:40.617 Malloc10 00:25:40.617 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.617 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:40.617 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:40.617 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:40.617 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=728343 00:25:40.617 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:25:40.617 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:25:40.874 [2024-12-05 13:57:23.273021] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:25:46.143 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:46.143 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 728064 00:25:46.143 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 728064 ']' 00:25:46.143 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 728064 00:25:46.143 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:25:46.143 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:46.143 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 728064 00:25:46.143 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:46.143 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:46.143 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 728064' 00:25:46.143 killing process with pid 728064 00:25:46.143 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 728064 00:25:46.143 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 728064 00:25:46.143 [2024-12-05 13:57:28.260993] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e8d20 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.261043] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e8d20 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.261051] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e8d20 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.261058] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e8d20 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.261064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e8d20 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.261070] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e8d20 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.261660] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e91f0 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.261685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e91f0 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.262201] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e96c0 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.262226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e96c0 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.262241] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e96c0 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.262248] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e96c0 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.262254] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e96c0 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.262893] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e8850 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.262918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e8850 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.262926] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e8850 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.262933] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e8850 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.262939] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e8850 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.262945] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e8850 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.266863] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13431e0 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.266884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13431e0 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.266891] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13431e0 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.266898] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13431e0 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.267393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13436b0 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.267752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1342840 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.267773] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1342840 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.267780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1342840 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.268609] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1344050 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.269216] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13449f0 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.269235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13449f0 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.269241] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13449f0 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.269248] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13449f0 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.269930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1343b80 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.269949] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1343b80 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.269956] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1343b80 is same with the state(6) to be set 00:25:46.143 [2024-12-05 13:57:28.269964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1343b80 is same with the state(6) to be set 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 starting I/O failed: -6 00:25:46.143 Write completed with 
error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 starting I/O failed: -6 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 starting I/O failed: -6 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 starting I/O failed: -6 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 starting I/O failed: -6 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 starting I/O failed: -6 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 starting I/O failed: -6 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 starting I/O failed: -6 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 starting I/O failed: -6 
00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 starting I/O failed: -6 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 [2024-12-05 13:57:28.271736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 starting I/O failed: -6 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 starting I/O failed: -6 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 starting I/O failed: -6 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 starting I/O failed: -6 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 starting I/O failed: -6 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 starting I/O failed: -6 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 starting I/O failed: -6 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 starting I/O failed: -6 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.143 starting I/O failed: -6 00:25:46.143 Write completed with error (sct=0, sc=8) 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 Write completed with error 
(sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 [2024-12-05 13:57:28.272628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 
00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with 
error (sct=0, sc=8) 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 [2024-12-05 13:57:28.273605] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a060 is same with the state(6) to be set 00:25:46.144 [2024-12-05 13:57:28.273624] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a060 is same with the state(6) to be set 00:25:46.144 [2024-12-05 13:57:28.273628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ[2024-12-05 13:57:28.273631] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a060 is same with transport error -6 (No such device or address) on qpair id 1 00:25:46.144 the state(6) to be set 00:25:46.144 [2024-12-05 13:57:28.273640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a060 is same with the state(6) to be set 00:25:46.144 [2024-12-05 13:57:28.273646] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a060 is same with the state(6) to be set 00:25:46.144 [2024-12-05 13:57:28.273656] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a060 is same with the state(6) to be set 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with 
error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 [2024-12-05 13:57:28.273924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a3e0 is same with Write completed with error (sct=0, sc=8) 00:25:46.144 the state(6) to be set 00:25:46.144 starting I/O failed: -6 00:25:46.144 [2024-12-05 13:57:28.273944] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a3e0 is same with the state(6) to be set 00:25:46.144 [2024-12-05 13:57:28.273952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a3e0 is same with the state(6) to be set 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 [2024-12-05 13:57:28.273959] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a3e0 is same with starting I/O failed: -6 00:25:46.144 the state(6) to be set 00:25:46.144 [2024-12-05 13:57:28.273966] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a3e0 is same with the state(6) to be set 00:25:46.144 [2024-12-05 13:57:28.273973] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a3e0 is same with the state(6) to be set 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 [2024-12-05 13:57:28.273979] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a3e0 is same with starting I/O failed: -6 00:25:46.144 the state(6) to be set 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 
starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.144 Write completed with error (sct=0, sc=8) 00:25:46.144 starting I/O failed: -6 00:25:46.145 [2024-12-05 13:57:28.274253] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a760 is same with the state(6) to be set 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 [2024-12-05 13:57:28.274272] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a760 is same with the state(6) to be set 00:25:46.145 starting I/O failed: -6 00:25:46.145 [2024-12-05 13:57:28.274278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a760 is same with the state(6) to be set 00:25:46.145 [2024-12-05 13:57:28.274285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a760 is same with the state(6) to be set 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 [2024-12-05 13:57:28.274291] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a760 is same with the state(6) to be set 00:25:46.145 starting I/O failed: -6 00:25:46.145 [2024-12-05 13:57:28.274297] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a760 is same with the state(6) 
to be set 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 [2024-12-05 13:57:28.274591] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c3220 is same with the state(6) to be set 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 [2024-12-05 13:57:28.274608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c3220 is same with the state(6) to be set 00:25:46.145 starting I/O failed: -6 00:25:46.145 [2024-12-05 13:57:28.274615] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c3220 is same with the state(6) to be set 00:25:46.145 [2024-12-05 13:57:28.274622] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c3220 is same with the state(6) to be set 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 [2024-12-05 13:57:28.274628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c3220 is same with the state(6) to be set 00:25:46.145 starting I/O failed: -6 00:25:46.145 [2024-12-05 13:57:28.274635] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c3220 is same with the state(6) to be set 00:25:46.145 [2024-12-05 13:57:28.274641] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c3220 is same with the state(6) to be set 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 [2024-12-05 13:57:28.274647] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c3220 is same with the state(6) to be set 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 [2024-12-05 
13:57:28.275135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:46.145 NVMe io qpair process completion error 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: 
-6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 [2024-12-05 13:57:28.276206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6 00:25:46.145 Write 
completed with error (sct=0, sc=8) 00:25:46.145 Write completed with error (sct=0, sc=8) 00:25:46.145 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:25:46.146 [2024-12-05 13:57:28.277084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-completion error lines omitted ...]
00:25:46.146 [2024-12-05 13:57:28.278063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-completion error lines omitted ...]
00:25:46.146 [2024-12-05 13:57:28.279833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:46.146 NVMe io qpair process completion error
[... repeated write-completion error lines omitted ...]
00:25:46.147 [2024-12-05 13:57:28.280784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-completion error lines omitted ...]
00:25:46.147 [2024-12-05 13:57:28.281554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-completion error lines omitted ...]
00:25:46.148 [2024-12-05 13:57:28.282620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-completion error lines omitted ...]
00:25:46.148 [2024-12-05 13:57:28.284450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:46.148 NVMe io qpair process completion error
[... repeated write-completion error lines omitted ...]
00:25:46.148 [2024-12-05 13:57:28.285697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-completion error lines omitted ...]
00:25:46.149 [2024-12-05 13:57:28.286572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-completion error lines omitted ...]
00:25:46.149 [2024-12-05 13:57:28.287819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-completion error lines omitted ...]
00:25:46.149 Write completed with error
(sct=0, sc=8) 00:25:46.149 starting I/O failed: -6 00:25:46.149 Write completed with error (sct=0, sc=8) 00:25:46.149 starting I/O failed: -6 00:25:46.149 Write completed with error (sct=0, sc=8) 00:25:46.149 starting I/O failed: -6 00:25:46.149 Write completed with error (sct=0, sc=8) 00:25:46.149 starting I/O failed: -6 00:25:46.149 Write completed with error (sct=0, sc=8) 00:25:46.149 starting I/O failed: -6 00:25:46.149 Write completed with error (sct=0, sc=8) 00:25:46.149 starting I/O failed: -6 00:25:46.149 Write completed with error (sct=0, sc=8) 00:25:46.149 starting I/O failed: -6 00:25:46.149 Write completed with error (sct=0, sc=8) 00:25:46.149 starting I/O failed: -6 00:25:46.149 Write completed with error (sct=0, sc=8) 00:25:46.149 starting I/O failed: -6 00:25:46.149 [2024-12-05 13:57:28.290356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:46.149 NVMe io qpair process completion error 00:25:46.149 Write completed with error (sct=0, sc=8) 00:25:46.149 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 
00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 [2024-12-05 13:57:28.291516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O 
failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, 
sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 [2024-12-05 13:57:28.292407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed 
with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 
starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 [2024-12-05 13:57:28.293419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 00:25:46.150 Write completed with error (sct=0, sc=8) 00:25:46.150 starting I/O failed: -6 
00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: 
-6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O 
failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 [2024-12-05 13:57:28.298126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:46.151 NVMe io qpair process completion error 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, 
sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 [2024-12-05 13:57:28.299094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with 
error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 
00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.151 starting I/O failed: -6 00:25:46.151 Write completed with error (sct=0, sc=8) 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 starting I/O failed: -6 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 starting I/O failed: -6 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 starting I/O failed: -6 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 starting I/O failed: -6 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 starting I/O failed: -6 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 [2024-12-05 13:57:28.299987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 starting I/O failed: -6 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 starting I/O failed: -6 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 starting I/O failed: -6 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 starting I/O failed: -6 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 starting I/O failed: -6 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 starting I/O failed: -6 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 starting I/O failed: -6 00:25:46.152 Write completed with 
error (sct=0, sc=8) 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 starting I/O failed: -6 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 starting I/O failed: -6 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 starting I/O failed: -6 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 starting I/O failed: -6 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 starting I/O failed: -6 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 starting I/O failed: -6 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 starting I/O failed: -6 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 starting I/O failed: -6 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 starting I/O failed: -6 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 starting I/O failed: -6 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 starting I/O failed: -6 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 starting I/O failed: -6 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 starting I/O failed: -6 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 starting I/O failed: -6 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 starting I/O failed: -6 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 starting I/O failed: -6 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 starting I/O failed: -6 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 starting I/O failed: -6 00:25:46.152 Write completed with error (sct=0, sc=8) 00:25:46.152 
00:25:46.152 Write completed with error (sct=0, sc=8)
00:25:46.152 starting I/O failed: -6
00:25:46.152 [previous two messages repeated many times]
00:25:46.152 [2024-12-05 13:57:28.301009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:46.153 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages omitted]
00:25:46.153 [2024-12-05 13:57:28.303128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:46.153 NVMe io qpair process completion error
00:25:46.153 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages omitted]
00:25:46.153 [2024-12-05 13:57:28.304079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:46.153 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages omitted]
00:25:46.153 [2024-12-05 13:57:28.304979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:46.154 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages omitted]
00:25:46.154 [2024-12-05 13:57:28.305971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:25:46.154 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages omitted]
00:25:46.154 [2024-12-05 13:57:28.308153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:46.154 NVMe io qpair process completion error
00:25:46.156 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages omitted]
completed with error (sct=0, sc=8) 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 
00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with 
error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.156 Write completed with error (sct=0, sc=8) 00:25:46.156 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: 
-6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O 
failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting 
I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 Write completed 
with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 starting I/O failed: -6 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.157 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 [2024-12-05 13:57:28.323197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with 
error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 
00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 [2024-12-05 13:57:28.324213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 
00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with 
error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 [2024-12-05 13:57:28.325257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed 
with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.158 Write completed with error (sct=0, sc=8) 00:25:46.158 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write 
completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 Write completed with error (sct=0, sc=8) 00:25:46.159 starting I/O failed: -6 00:25:46.159 
Write completed with error (sct=0, sc=8)
00:25:46.159 starting I/O failed: -6
[... repeated write completion errors omitted ...]
00:25:46.159 [2024-12-05 13:57:28.327119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:46.159 NVMe io qpair process completion error
00:25:46.159 Initializing NVMe Controllers
00:25:46.159 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:25:46.159 Controller IO queue size 128, less than required.
00:25:46.159 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:46.159 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:25:46.159 Controller IO queue size 128, less than required.
00:25:46.159 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:46.159 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:25:46.159 Controller IO queue size 128, less than required.
00:25:46.159 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:46.159 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:25:46.159 Controller IO queue size 128, less than required.
00:25:46.159 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:46.159 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:25:46.159 Controller IO queue size 128, less than required.
00:25:46.159 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:46.159 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:46.159 Controller IO queue size 128, less than required.
00:25:46.159 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:46.159 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:25:46.159 Controller IO queue size 128, less than required.
00:25:46.159 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:46.159 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:25:46.159 Controller IO queue size 128, less than required.
00:25:46.159 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:46.159 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:25:46.159 Controller IO queue size 128, less than required.
00:25:46.159 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:46.159 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:25:46.159 Controller IO queue size 128, less than required.
00:25:46.159 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
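The repeated "Controller IO queue size 128, less than required" advisory above means the test requested a deeper queue than the target's IO queue can hold, so the excess requests wait inside the NVMe driver. A minimal sketch of that relationship (the function name and the simplifying assumption that every excess request simply queues are illustrative, not SPDK API):

```python
def requests_queued_at_driver(requested_qdepth: int, ctrlr_io_queue_size: int = 128) -> int:
    """Estimate how many outstanding requests cannot fit on the
    controller's IO queue and therefore wait at the NVMe driver.
    Hypothetical helper for reading the log above, not an SPDK call."""
    return max(0, requested_qdepth - ctrlr_io_queue_size)

# With the controller-reported IO queue size of 128 from the log:
print(requests_queued_at_driver(256))  # half of the requests would queue
print(requests_queued_at_driver(64))   # fits entirely; nothing queues
```

Lowering the workload's queue depth below the controller-reported size (or shrinking the IO size) is what the advisory is suggesting.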
00:25:46.159 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:25:46.159 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:25:46.159 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:25:46.159 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:25:46.159 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:25:46.159 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:46.159 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:25:46.159 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:25:46.159 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:25:46.159 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:25:46.159 Initialization complete. Launching workers. 
00:25:46.159 ======================================================== 00:25:46.159 Latency(us) 00:25:46.159 Device Information : IOPS MiB/s Average min max 00:25:46.159 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2241.00 96.29 57127.45 951.65 121781.39 00:25:46.159 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2204.11 94.71 57393.05 949.67 110418.47 00:25:46.159 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2217.70 95.29 57051.65 885.06 107807.69 00:25:46.159 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2206.70 94.82 57347.72 773.48 106962.98 00:25:46.159 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2195.48 94.34 57660.15 510.84 104384.18 00:25:46.159 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2146.08 92.21 59012.61 911.87 103016.11 00:25:46.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2141.55 92.02 59187.93 690.74 107520.52 00:25:46.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2163.55 92.97 58602.58 899.64 96287.86 00:25:46.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2156.86 92.68 58794.92 764.31 99974.27 00:25:46.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2191.81 94.18 57926.93 920.16 119611.01 00:25:46.160 ======================================================== 00:25:46.160 Total : 21864.82 939.50 57999.70 510.84 121781.39 00:25:46.160 00:25:46.160 [2024-12-05 13:57:28.330142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1990ef0 is same with the state(6) to be set 00:25:46.160 [2024-12-05 13:57:28.330185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1992900 is same with the state(6) to be set 00:25:46.160 [2024-12-05 13:57:28.330215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1990890 is same with the state(6) to be set 00:25:46.160 [2024-12-05 13:57:28.330243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1991410 is same with the state(6) to be set 00:25:46.160 [2024-12-05 13:57:28.330272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1991a70 is same with the state(6) to be set 00:25:46.160 [2024-12-05 13:57:28.330299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1992720 is same with the state(6) to be set 00:25:46.160 [2024-12-05 13:57:28.330325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1990bc0 is same with the state(6) to be set 00:25:46.160 [2024-12-05 13:57:28.330352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1991740 is same with the state(6) to be set 00:25:46.160 [2024-12-05 13:57:28.330386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1992ae0 is same with the state(6) to be set 00:25:46.160 [2024-12-05 13:57:28.330414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1990560 is same with the state(6) to be set 00:25:46.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:25:46.160 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:25:47.098 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 728343 00:25:47.098 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:25:47.098 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 728343 00:25:47.098 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@640 -- # local arg=wait 00:25:47.098 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:47.098 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:25:47.098 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:47.098 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 728343 00:25:47.098 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:25:47.098 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:47.098 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:47.098 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:47.098 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:25:47.098 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:47.098 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:47.098 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:47.098 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:47.098 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:25:47.098 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:25:47.098 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:47.098 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:25:47.098 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:47.098 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:47.098 rmmod nvme_tcp 00:25:47.358 rmmod nvme_fabrics 00:25:47.358 rmmod nvme_keyring 00:25:47.358 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:47.358 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:25:47.358 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:25:47.358 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 728064 ']' 00:25:47.358 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 728064 00:25:47.358 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 728064 ']' 00:25:47.358 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 728064 00:25:47.358 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (728064) - No such process 00:25:47.358 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 728064 is not found' 00:25:47.358 Process with pid 728064 is not found 00:25:47.358 
13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:47.358 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:47.358 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:47.358 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:25:47.358 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:25:47.358 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:47.358 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:25:47.358 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:47.358 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:47.358 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.358 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:47.358 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:49.263 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:49.263 00:25:49.263 real 0m10.404s 00:25:49.263 user 0m27.329s 00:25:49.263 sys 0m5.351s 00:25:49.263 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:49.263 13:57:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:49.263 ************************************ 00:25:49.263 END TEST nvmf_shutdown_tc4 00:25:49.263 ************************************ 00:25:49.263 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:25:49.263 00:25:49.263 real 0m40.986s 00:25:49.263 user 1m40.073s 00:25:49.263 sys 0m14.202s 00:25:49.263 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:49.263 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:49.263 ************************************ 00:25:49.263 END TEST nvmf_shutdown 00:25:49.263 ************************************ 00:25:49.522 13:57:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:25:49.522 13:57:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:49.522 13:57:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:49.522 13:57:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:49.522 ************************************ 00:25:49.522 START TEST nvmf_nsid 00:25:49.522 ************************************ 00:25:49.522 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:25:49.522 * Looking for test storage... 
00:25:49.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:49.522 
13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:49.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.522 --rc genhtml_branch_coverage=1 00:25:49.522 --rc genhtml_function_coverage=1 00:25:49.522 --rc genhtml_legend=1 00:25:49.522 --rc geninfo_all_blocks=1 00:25:49.522 --rc 
geninfo_unexecuted_blocks=1 00:25:49.522 00:25:49.522 ' 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:49.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.522 --rc genhtml_branch_coverage=1 00:25:49.522 --rc genhtml_function_coverage=1 00:25:49.522 --rc genhtml_legend=1 00:25:49.522 --rc geninfo_all_blocks=1 00:25:49.522 --rc geninfo_unexecuted_blocks=1 00:25:49.522 00:25:49.522 ' 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:49.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.522 --rc genhtml_branch_coverage=1 00:25:49.522 --rc genhtml_function_coverage=1 00:25:49.522 --rc genhtml_legend=1 00:25:49.522 --rc geninfo_all_blocks=1 00:25:49.522 --rc geninfo_unexecuted_blocks=1 00:25:49.522 00:25:49.522 ' 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:49.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.522 --rc genhtml_branch_coverage=1 00:25:49.522 --rc genhtml_function_coverage=1 00:25:49.522 --rc genhtml_legend=1 00:25:49.522 --rc geninfo_all_blocks=1 00:25:49.522 --rc geninfo_unexecuted_blocks=1 00:25:49.522 00:25:49.522 ' 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:49.522 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:49.781 13:57:32 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:49.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:25:49.781 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:56.349 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:56.349 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:56.349 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:56.350 Found net devices under 0000:86:00.0: cvl_0_0 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:56.350 Found net devices under 0000:86:00.1: cvl_0_1 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:56.350 13:57:37 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:56.350 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:56.350 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:25:56.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.438 ms 00:25:56.350 00:25:56.350 --- 10.0.0.2 ping statistics --- 00:25:56.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.350 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:56.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:56.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:25:56.350 00:25:56.350 --- 10.0.0.1 ping statistics --- 00:25:56.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.350 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:56.350 13:57:38 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=732943 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 732943 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 732943 ']' 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:56.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:56.350 [2024-12-05 13:57:38.128582] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:25:56.350 [2024-12-05 13:57:38.128627] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:56.350 [2024-12-05 13:57:38.207954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.350 [2024-12-05 13:57:38.248094] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:56.350 [2024-12-05 13:57:38.248128] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:56.350 [2024-12-05 13:57:38.248135] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:56.350 [2024-12-05 13:57:38.248140] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:56.350 [2024-12-05 13:57:38.248145] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:56.350 [2024-12-05 13:57:38.248697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=733038 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.350 
13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:56.350 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:25:56.351 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:25:56.351 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=12ab055c-7f37-40a6-956f-e72ece9ea80b 00:25:56.351 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:25:56.351 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=70182a6c-90ee-4219-8a20-04f3cca8c698 00:25:56.351 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:25:56.351 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=5dfb2545-7395-4f19-93f1-bc9885952364 00:25:56.351 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:25:56.351 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.351 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:56.351 null0 00:25:56.351 null1 00:25:56.351 [2024-12-05 13:57:38.438505] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:25:56.351 [2024-12-05 13:57:38.438549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid733038 ] 00:25:56.351 null2 00:25:56.351 [2024-12-05 13:57:38.443784] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:56.351 [2024-12-05 13:57:38.467971] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:56.351 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.351 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 733038 /var/tmp/tgt2.sock 00:25:56.351 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 733038 ']' 00:25:56.351 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:25:56.351 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:56.351 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:25:56.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:25:56.351 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:56.351 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:56.351 [2024-12-05 13:57:38.514712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.351 [2024-12-05 13:57:38.555086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:56.351 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:56.351 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:25:56.351 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:25:56.610 [2024-12-05 13:57:39.080805] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:56.610 [2024-12-05 13:57:39.096919] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:25:56.610 nvme0n1 nvme0n2 00:25:56.610 nvme1n1 00:25:56.610 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:25:56.610 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:25:56.610 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:57.984 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:25:57.984 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:25:57.984 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:25:57.984 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:25:57.984 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:25:57.984 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:25:57.984 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:25:57.984 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:25:57.984 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:57.984 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:25:57.984 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:25:57.984 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:25:57.984 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 12ab055c-7f37-40a6-956f-e72ece9ea80b 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:25:58.918 13:57:41 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=12ab055c7f3740a6956fe72ece9ea80b 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 12AB055C7F3740A6956FE72ECE9EA80B 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 12AB055C7F3740A6956FE72ECE9EA80B == \1\2\A\B\0\5\5\C\7\F\3\7\4\0\A\6\9\5\6\F\E\7\2\E\C\E\9\E\A\8\0\B ]] 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 70182a6c-90ee-4219-8a20-04f3cca8c698 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:25:58.918 
13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=70182a6c90ee42198a2004f3cca8c698 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 70182A6C90EE42198A2004F3CCA8C698 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 70182A6C90EE42198A2004F3CCA8C698 == \7\0\1\8\2\A\6\C\9\0\E\E\4\2\1\9\8\A\2\0\0\4\F\3\C\C\A\8\C\6\9\8 ]] 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 5dfb2545-7395-4f19-93f1-bc9885952364 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:25:58.918 13:57:41 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=5dfb254573954f1993f1bc9885952364 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 5DFB254573954F1993F1BC9885952364 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 5DFB254573954F1993F1BC9885952364 == \5\D\F\B\2\5\4\5\7\3\9\5\4\F\1\9\9\3\F\1\B\C\9\8\8\5\9\5\2\3\6\4 ]] 00:25:58.918 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:25:59.176 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:25:59.176 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:25:59.176 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 733038 00:25:59.176 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 733038 ']' 00:25:59.176 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 733038 00:25:59.176 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:25:59.176 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:59.176 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 733038 00:25:59.176 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:59.176 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:59.177 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 733038' 00:25:59.177 killing process with pid 733038 00:25:59.177 13:57:41 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 733038 00:25:59.177 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 733038 00:25:59.435 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:25:59.435 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:59.435 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:25:59.435 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:59.435 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:25:59.435 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:59.435 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:59.435 rmmod nvme_tcp 00:25:59.435 rmmod nvme_fabrics 00:25:59.694 rmmod nvme_keyring 00:25:59.694 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:59.694 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:25:59.694 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:25:59.694 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 732943 ']' 00:25:59.694 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 732943 00:25:59.694 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 732943 ']' 00:25:59.694 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 732943 00:25:59.694 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:25:59.694 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:59.694 13:57:42 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 732943 00:25:59.694 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:59.694 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:59.695 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 732943' 00:25:59.695 killing process with pid 732943 00:25:59.695 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 732943 00:25:59.695 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 732943 00:25:59.695 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:59.695 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:59.695 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:59.695 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:25:59.695 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:25:59.695 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:59.695 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:25:59.695 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:59.695 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:59.695 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.695 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:59.695 13:57:42 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.229 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:02.229 00:26:02.229 real 0m12.403s 00:26:02.229 user 0m9.652s 00:26:02.229 sys 0m5.511s 00:26:02.229 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:02.229 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:02.229 ************************************ 00:26:02.229 END TEST nvmf_nsid 00:26:02.229 ************************************ 00:26:02.229 13:57:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:26:02.229 00:26:02.229 real 12m1.119s 00:26:02.229 user 25m50.307s 00:26:02.229 sys 3m40.064s 00:26:02.229 13:57:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:02.229 13:57:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:02.229 ************************************ 00:26:02.229 END TEST nvmf_target_extra 00:26:02.229 ************************************ 00:26:02.229 13:57:44 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:26:02.229 13:57:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:02.229 13:57:44 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:02.229 13:57:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:02.229 ************************************ 00:26:02.229 START TEST nvmf_host 00:26:02.229 ************************************ 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:26:02.229 * Looking for test storage... 
00:26:02.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:02.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.229 --rc genhtml_branch_coverage=1 00:26:02.229 --rc genhtml_function_coverage=1 00:26:02.229 --rc genhtml_legend=1 00:26:02.229 --rc geninfo_all_blocks=1 00:26:02.229 --rc geninfo_unexecuted_blocks=1 00:26:02.229 00:26:02.229 ' 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:02.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.229 --rc genhtml_branch_coverage=1 00:26:02.229 --rc genhtml_function_coverage=1 00:26:02.229 --rc genhtml_legend=1 00:26:02.229 --rc 
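The trace above steps through `lt 1.15 2`, which splits both versions on dots and compares them field by field until one side wins. A minimal sketch of that comparison follows; `lt_version` is a hypothetical stand-in, not the exact `scripts/common.sh` implementation (which also handles `-` separators and other operators).

```shell
# Sketch of the dotted-version comparison walked through in the trace:
# split each version on '.', pad the shorter one with zeros, compare
# field by field. Returns 0 (true) when $1 < $2, like the script's `lt`.
lt_version() {
  local IFS=.
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < max; i++ )); do
    local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields count as 0
    if (( a < b )); then return 0; fi
    if (( a > b )); then return 1; fi
  done
  return 1   # equal versions are not "less than"
}

lt_version 1.15 2 && echo "1.15 < 2"
```

This is why the log then sets `lcov_rc_opt`: the detected lcov 1.15 is below 2, so the 1.x-style `--rc lcov_*` options are selected.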
geninfo_all_blocks=1 00:26:02.229 --rc geninfo_unexecuted_blocks=1 00:26:02.229 00:26:02.229 ' 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:02.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.229 --rc genhtml_branch_coverage=1 00:26:02.229 --rc genhtml_function_coverage=1 00:26:02.229 --rc genhtml_legend=1 00:26:02.229 --rc geninfo_all_blocks=1 00:26:02.229 --rc geninfo_unexecuted_blocks=1 00:26:02.229 00:26:02.229 ' 00:26:02.229 13:57:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:02.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.229 --rc genhtml_branch_coverage=1 00:26:02.230 --rc genhtml_function_coverage=1 00:26:02.230 --rc genhtml_legend=1 00:26:02.230 --rc geninfo_all_blocks=1 00:26:02.230 --rc geninfo_unexecuted_blocks=1 00:26:02.230 00:26:02.230 ' 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- 
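Note how the `PATH` in the `paths/export.sh` lines keeps growing: each time the script is sourced it prepends `/opt/protoc`, `/opt/go`, and `/opt/golangci` again, so the same directories appear many times. Harmless (lookup stops at the first hit), but a dedupe pass would keep the trace readable. A hedged sketch, with a hypothetical helper name:

```shell
# Remove duplicate entries from a PATH-like string, keeping the first
# occurrence of each directory (so precedence is unchanged).
dedupe_path() {
  local out= dir seen=":"
  local IFS=:
  for dir in $1; do                     # split on ':' via IFS
    case "$seen" in
      *":$dir:"*) ;;                    # already kept, skip duplicate
      *) out="${out:+$out:}$dir"; seen="$seen$dir:" ;;
    esac
  done
  printf '%s\n' "$out"
}

dedupe_path "/opt/go/bin:/usr/bin:/opt/go/bin:/usr/local/bin"
```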
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:02.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.230 ************************************ 00:26:02.230 START TEST nvmf_multicontroller 00:26:02.230 ************************************ 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:02.230 * Looking for test storage... 
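The `common.sh: line 33: [: : integer expression expected` error above (it recurs every time `nvmf/common.sh` is sourced) comes from running a numeric test, `'[' '' -eq 1 ']'`, while the variable being tested is empty: `-eq` requires integers on both sides. A minimal reproduction and the usual guard, defaulting the value before the comparison; `check_flag` is a hypothetical helper, not the actual `build_nvmf_app_args` code:

```shell
# Without a default, an empty value makes the numeric test itself fail:
#   [ "" -eq 1 ]  ->  "[: : integer expression expected", exit status 2
# Guarding with ${var:-0} keeps the test well-formed.
check_flag() {
  local flag="${1:-0}"        # empty/unset input defaults to 0
  if [ "$flag" -eq 1 ]; then
    echo "enabled"
  else
    echo "disabled"
  fi
}

check_flag ""    # empty input no longer triggers the [: error
check_flag 1
```

In the log the script happens to fall through to the `else` path anyway (a failed test is treated as false), so the tests still pass, but the stderr noise repeats in every test section.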
00:26:02.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:26:02.230 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:02.490 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:02.490 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:02.490 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:02.490 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:02.490 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:26:02.490 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:26:02.490 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:26:02.490 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:26:02.490 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:26:02.490 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:26:02.490 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:26:02.490 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:02.490 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:26:02.490 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:26:02.490 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:26:02.490 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:02.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.491 --rc genhtml_branch_coverage=1 00:26:02.491 --rc genhtml_function_coverage=1 
00:26:02.491 --rc genhtml_legend=1 00:26:02.491 --rc geninfo_all_blocks=1 00:26:02.491 --rc geninfo_unexecuted_blocks=1 00:26:02.491 00:26:02.491 ' 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:02.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.491 --rc genhtml_branch_coverage=1 00:26:02.491 --rc genhtml_function_coverage=1 00:26:02.491 --rc genhtml_legend=1 00:26:02.491 --rc geninfo_all_blocks=1 00:26:02.491 --rc geninfo_unexecuted_blocks=1 00:26:02.491 00:26:02.491 ' 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:02.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.491 --rc genhtml_branch_coverage=1 00:26:02.491 --rc genhtml_function_coverage=1 00:26:02.491 --rc genhtml_legend=1 00:26:02.491 --rc geninfo_all_blocks=1 00:26:02.491 --rc geninfo_unexecuted_blocks=1 00:26:02.491 00:26:02.491 ' 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:02.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.491 --rc genhtml_branch_coverage=1 00:26:02.491 --rc genhtml_function_coverage=1 00:26:02.491 --rc genhtml_legend=1 00:26:02.491 --rc geninfo_all_blocks=1 00:26:02.491 --rc geninfo_unexecuted_blocks=1 00:26:02.491 00:26:02.491 ' 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:02.491 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:02.492 13:57:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:02.492 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:02.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:02.492 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:02.492 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:02.492 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:02.492 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:02.492 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:02.492 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:26:02.492 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:26:02.492 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:02.492 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:26:02.492 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:26:02.492 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:02.492 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:02.492 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:02.492 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:02.492 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:26:02.492 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.492 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.492 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.492 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:02.492 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:02.492 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:26:02.492 13:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:09.074 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:09.075 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:09.075 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:09.075 13:57:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:09.075 Found net devices under 0000:86:00.0: cvl_0_0 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:09.075 Found net devices under 0000:86:00.1: cvl_0_1 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:09.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:09.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:26:09.075 00:26:09.075 --- 10.0.0.2 ping statistics --- 00:26:09.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:09.075 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:09.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:09.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:26:09.075 00:26:09.075 --- 10.0.0.1 ping statistics --- 00:26:09.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:09.075 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=737185 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 737185 00:26:09.075 13:57:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 737185 ']' 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:09.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:09.075 13:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:09.075 [2024-12-05 13:57:50.885434] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:26:09.075 [2024-12-05 13:57:50.885482] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:09.075 [2024-12-05 13:57:50.962158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:09.075 [2024-12-05 13:57:51.004444] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:09.075 [2024-12-05 13:57:51.004480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:09.075 [2024-12-05 13:57:51.004487] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:09.075 [2024-12-05 13:57:51.004493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:09.075 [2024-12-05 13:57:51.004498] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:09.075 [2024-12-05 13:57:51.005935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:09.075 [2024-12-05 13:57:51.006039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:09.075 [2024-12-05 13:57:51.006039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:09.075 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:09.076 [2024-12-05 13:57:51.143416] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:09.076 Malloc0 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:09.076 [2024-12-05 
13:57:51.212052] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:09.076 [2024-12-05 13:57:51.219967] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:09.076 Malloc1 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=737372 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 737372 /var/tmp/bdevperf.sock 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 737372 ']' 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:09.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:09.076 NVMe0n1 00:26:09.076 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.335 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:09.335 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:26:09.335 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.335 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.336 1 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:26:09.336 13:57:51 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:09.336 request: 00:26:09.336 { 00:26:09.336 "name": "NVMe0", 00:26:09.336 "trtype": "tcp", 00:26:09.336 "traddr": "10.0.0.2", 00:26:09.336 "adrfam": "ipv4", 00:26:09.336 "trsvcid": "4420", 00:26:09.336 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:09.336 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:26:09.336 "hostaddr": "10.0.0.1", 00:26:09.336 "prchk_reftag": false, 00:26:09.336 "prchk_guard": false, 00:26:09.336 "hdgst": false, 00:26:09.336 "ddgst": false, 00:26:09.336 "allow_unrecognized_csi": false, 00:26:09.336 "method": "bdev_nvme_attach_controller", 00:26:09.336 "req_id": 1 00:26:09.336 } 00:26:09.336 Got JSON-RPC error response 00:26:09.336 response: 00:26:09.336 { 00:26:09.336 "code": -114, 00:26:09.336 "message": "A controller named NVMe0 already exists with the specified network path" 00:26:09.336 } 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:26:09.336 13:57:51 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:09.336 request: 00:26:09.336 { 00:26:09.336 "name": "NVMe0", 00:26:09.336 "trtype": "tcp", 00:26:09.336 "traddr": "10.0.0.2", 00:26:09.336 "adrfam": "ipv4", 00:26:09.336 "trsvcid": "4420", 00:26:09.336 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:09.336 "hostaddr": "10.0.0.1", 00:26:09.336 "prchk_reftag": false, 00:26:09.336 "prchk_guard": false, 00:26:09.336 "hdgst": false, 00:26:09.336 "ddgst": false, 00:26:09.336 "allow_unrecognized_csi": false, 00:26:09.336 "method": "bdev_nvme_attach_controller", 00:26:09.336 "req_id": 1 00:26:09.336 } 00:26:09.336 Got JSON-RPC error response 00:26:09.336 response: 00:26:09.336 { 00:26:09.336 "code": -114, 00:26:09.336 "message": "A controller named NVMe0 already exists with the specified network path" 00:26:09.336 } 00:26:09.336 13:57:51 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:09.336 request: 00:26:09.336 { 00:26:09.336 "name": "NVMe0", 00:26:09.336 "trtype": "tcp", 00:26:09.336 "traddr": "10.0.0.2", 00:26:09.336 "adrfam": "ipv4", 00:26:09.336 "trsvcid": "4420", 00:26:09.336 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:09.336 "hostaddr": "10.0.0.1", 00:26:09.336 "prchk_reftag": false, 00:26:09.336 "prchk_guard": false, 00:26:09.336 "hdgst": false, 00:26:09.336 "ddgst": false, 00:26:09.336 "multipath": "disable", 00:26:09.336 "allow_unrecognized_csi": false, 00:26:09.336 "method": "bdev_nvme_attach_controller", 00:26:09.336 "req_id": 1 00:26:09.336 } 00:26:09.336 Got JSON-RPC error response 00:26:09.336 response: 00:26:09.336 { 00:26:09.336 "code": -114, 00:26:09.336 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:26:09.336 } 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:09.336 request: 00:26:09.336 { 00:26:09.336 "name": "NVMe0", 00:26:09.336 "trtype": "tcp", 00:26:09.336 "traddr": "10.0.0.2", 00:26:09.336 "adrfam": "ipv4", 00:26:09.336 "trsvcid": "4420", 00:26:09.336 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:09.336 "hostaddr": "10.0.0.1", 00:26:09.336 "prchk_reftag": false, 00:26:09.336 "prchk_guard": false, 00:26:09.336 "hdgst": false, 00:26:09.336 "ddgst": false, 00:26:09.336 "multipath": "failover", 00:26:09.336 "allow_unrecognized_csi": false, 00:26:09.336 "method": "bdev_nvme_attach_controller", 00:26:09.336 "req_id": 1 00:26:09.336 } 00:26:09.336 Got JSON-RPC error response 00:26:09.336 response: 00:26:09.336 { 00:26:09.336 "code": -114, 00:26:09.336 "message": "A controller named NVMe0 already exists with the specified network path" 00:26:09.336 } 00:26:09.336 13:57:51 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:09.336 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:26:09.337 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:09.337 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:09.337 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:09.337 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:09.337 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.337 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:09.337 NVMe0n1 00:26:09.337 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.337 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:09.337 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.337 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:09.337 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.337 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:26:09.337 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.337 13:57:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:09.595 00:26:09.595 13:57:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.595 13:57:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:09.595 13:57:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:26:09.595 13:57:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.595 13:57:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:09.595 13:57:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.595 13:57:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:26:09.595 13:57:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:10.652 { 00:26:10.652 "results": [ 00:26:10.652 { 00:26:10.652 "job": "NVMe0n1", 00:26:10.652 "core_mask": "0x1", 00:26:10.652 "workload": "write", 00:26:10.652 "status": "finished", 00:26:10.652 "queue_depth": 128, 00:26:10.652 "io_size": 4096, 00:26:10.652 "runtime": 1.004913, 00:26:10.652 "iops": 25036.993252152177, 00:26:10.652 "mibps": 97.80075489121944, 00:26:10.652 "io_failed": 0, 00:26:10.652 "io_timeout": 0, 00:26:10.652 "avg_latency_us": 5102.151127261715, 00:26:10.652 "min_latency_us": 3027.1390476190477, 00:26:10.652 "max_latency_us": 12295.801904761905 00:26:10.652 } 00:26:10.652 ], 00:26:10.652 "core_count": 1 00:26:10.652 } 00:26:10.937 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:26:10.937 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.937 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:10.937 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.937 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:26:10.937 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 737372 00:26:10.937 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 737372 ']' 00:26:10.937 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 737372 00:26:10.937 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:26:10.937 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:10.937 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 737372 00:26:10.937 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:10.937 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:10.937 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 737372' 00:26:10.937 killing process with pid 737372 00:26:10.937 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 737372 00:26:10.937 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 737372 00:26:10.937 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:26:10.937 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.937 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:10.937 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.937 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:10.937 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.937 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:10.937 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.937 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:26:10.937 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:10.937 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:26:10.937 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:26:10.937 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:26:10.938 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:26:10.938 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:10.938 [2024-12-05 13:57:51.319442] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:26:10.938 [2024-12-05 13:57:51.319492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid737372 ] 00:26:10.938 [2024-12-05 13:57:51.396116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.938 [2024-12-05 13:57:51.438571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.938 [2024-12-05 13:57:52.065142] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 6ceabfb8-0e73-45ba-9c55-d037a5cffba7 already exists 00:26:10.938 [2024-12-05 13:57:52.065168] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:6ceabfb8-0e73-45ba-9c55-d037a5cffba7 alias for bdev NVMe1n1 00:26:10.938 [2024-12-05 13:57:52.065176] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:26:10.938 Running I/O for 1 seconds... 00:26:10.938 24968.00 IOPS, 97.53 MiB/s 00:26:10.938 Latency(us) 00:26:10.938 [2024-12-05T12:57:53.525Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:10.938 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:26:10.938 NVMe0n1 : 1.00 25036.99 97.80 0.00 0.00 5102.15 3027.14 12295.80 00:26:10.938 [2024-12-05T12:57:53.525Z] =================================================================================================================== 00:26:10.938 [2024-12-05T12:57:53.525Z] Total : 25036.99 97.80 0.00 0.00 5102.15 3027.14 12295.80 00:26:10.938 Received shutdown signal, test time was about 1.000000 seconds 00:26:10.938 00:26:10.938 Latency(us) 00:26:10.938 [2024-12-05T12:57:53.525Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:10.938 [2024-12-05T12:57:53.525Z] =================================================================================================================== 00:26:10.938 [2024-12-05T12:57:53.525Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:26:10.938 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:10.938 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:10.938 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:26:10.938 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:26:10.938 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:10.938 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:26:10.938 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:10.938 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:26:10.938 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:10.938 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:10.938 rmmod nvme_tcp 00:26:10.938 rmmod nvme_fabrics 00:26:10.938 rmmod nvme_keyring 00:26:11.197 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:11.197 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:26:11.197 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:26:11.197 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 737185 ']' 00:26:11.197 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 737185 00:26:11.197 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 737185 ']' 00:26:11.197 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 737185 
00:26:11.197 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:26:11.197 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:11.197 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 737185 00:26:11.197 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:11.197 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:11.197 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 737185' 00:26:11.197 killing process with pid 737185 00:26:11.197 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 737185 00:26:11.197 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 737185 00:26:11.456 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:11.456 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:11.456 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:11.456 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:26:11.456 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:26:11.456 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:26:11.456 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:11.456 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:11.456 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:26:11.456 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.456 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:11.456 13:57:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.359 13:57:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:13.359 00:26:13.359 real 0m11.197s 00:26:13.359 user 0m12.299s 00:26:13.359 sys 0m5.248s 00:26:13.359 13:57:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:13.359 13:57:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:13.359 ************************************ 00:26:13.359 END TEST nvmf_multicontroller 00:26:13.359 ************************************ 00:26:13.359 13:57:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:13.359 13:57:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:13.359 13:57:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:13.359 13:57:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.359 ************************************ 00:26:13.359 START TEST nvmf_aer 00:26:13.359 ************************************ 00:26:13.359 13:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:13.622 * Looking for test storage... 
00:26:13.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:13.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.622 --rc genhtml_branch_coverage=1 00:26:13.622 --rc genhtml_function_coverage=1 00:26:13.622 --rc genhtml_legend=1 00:26:13.622 --rc geninfo_all_blocks=1 00:26:13.622 --rc geninfo_unexecuted_blocks=1 00:26:13.622 00:26:13.622 ' 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:13.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.622 --rc 
genhtml_branch_coverage=1 00:26:13.622 --rc genhtml_function_coverage=1 00:26:13.622 --rc genhtml_legend=1 00:26:13.622 --rc geninfo_all_blocks=1 00:26:13.622 --rc geninfo_unexecuted_blocks=1 00:26:13.622 00:26:13.622 ' 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:13.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.622 --rc genhtml_branch_coverage=1 00:26:13.622 --rc genhtml_function_coverage=1 00:26:13.622 --rc genhtml_legend=1 00:26:13.622 --rc geninfo_all_blocks=1 00:26:13.622 --rc geninfo_unexecuted_blocks=1 00:26:13.622 00:26:13.622 ' 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:13.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.622 --rc genhtml_branch_coverage=1 00:26:13.622 --rc genhtml_function_coverage=1 00:26:13.622 --rc genhtml_legend=1 00:26:13.622 --rc geninfo_all_blocks=1 00:26:13.622 --rc geninfo_unexecuted_blocks=1 00:26:13.622 00:26:13.622 ' 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:13.622 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:13.623 13:57:56 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:13.623 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:26:13.623 13:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:20.187 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:20.187 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.187 13:58:01 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:20.187 Found net devices under 0000:86:00.0: cvl_0_0 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:20.187 Found net devices under 0000:86:00.1: cvl_0_1 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:20.187 13:58:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:20.187 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:20.187 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:20.187 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:20.187 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:20.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:20.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:26:20.187 00:26:20.187 --- 10.0.0.2 ping statistics --- 00:26:20.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.187 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:26:20.187 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:20.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:20.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:26:20.187 00:26:20.187 --- 10.0.0.1 ping statistics --- 00:26:20.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.187 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:26:20.187 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:20.187 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:26:20.187 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:20.187 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:20.187 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:20.187 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:20.187 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:20.187 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:20.187 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=741156 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 741156 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 741156 ']' 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.188 [2024-12-05 13:58:02.135776] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:26:20.188 [2024-12-05 13:58:02.135821] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:20.188 [2024-12-05 13:58:02.214690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:20.188 [2024-12-05 13:58:02.258548] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:20.188 [2024-12-05 13:58:02.258581] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:20.188 [2024-12-05 13:58:02.258588] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:20.188 [2024-12-05 13:58:02.258594] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:20.188 [2024-12-05 13:58:02.258599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:20.188 [2024-12-05 13:58:02.260186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.188 [2024-12-05 13:58:02.260294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:20.188 [2024-12-05 13:58:02.260430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.188 [2024-12-05 13:58:02.260431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.188 [2024-12-05 13:58:02.399008] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.188 Malloc0 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.188 [2024-12-05 13:58:02.469565] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.188 [ 00:26:20.188 { 00:26:20.188 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:20.188 "subtype": "Discovery", 00:26:20.188 "listen_addresses": [], 00:26:20.188 "allow_any_host": true, 00:26:20.188 "hosts": [] 00:26:20.188 }, 00:26:20.188 { 00:26:20.188 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:20.188 "subtype": "NVMe", 00:26:20.188 "listen_addresses": [ 00:26:20.188 { 00:26:20.188 "trtype": "TCP", 00:26:20.188 "adrfam": "IPv4", 00:26:20.188 "traddr": "10.0.0.2", 00:26:20.188 "trsvcid": "4420" 00:26:20.188 } 00:26:20.188 ], 00:26:20.188 "allow_any_host": true, 00:26:20.188 "hosts": [], 00:26:20.188 "serial_number": "SPDK00000000000001", 00:26:20.188 "model_number": "SPDK bdev Controller", 00:26:20.188 "max_namespaces": 2, 00:26:20.188 "min_cntlid": 1, 00:26:20.188 "max_cntlid": 65519, 00:26:20.188 "namespaces": [ 00:26:20.188 { 00:26:20.188 "nsid": 1, 00:26:20.188 "bdev_name": "Malloc0", 00:26:20.188 "name": "Malloc0", 00:26:20.188 "nguid": "17503E7C804246E6A65FD4A51E912A38", 00:26:20.188 "uuid": "17503e7c-8042-46e6-a65f-d4a51e912a38" 00:26:20.188 } 00:26:20.188 ] 00:26:20.188 } 00:26:20.188 ] 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=741317 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.188 Malloc1 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.188 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.188 Asynchronous Event Request test 00:26:20.188 Attaching to 10.0.0.2 00:26:20.188 Attached to 10.0.0.2 00:26:20.188 Registering asynchronous event callbacks... 00:26:20.188 Starting namespace attribute notice tests for all controllers... 00:26:20.188 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:20.188 aer_cb - Changed Namespace 00:26:20.188 Cleaning up... 
00:26:20.188 [ 00:26:20.188 { 00:26:20.188 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:20.188 "subtype": "Discovery", 00:26:20.188 "listen_addresses": [], 00:26:20.188 "allow_any_host": true, 00:26:20.188 "hosts": [] 00:26:20.188 }, 00:26:20.188 { 00:26:20.188 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:20.188 "subtype": "NVMe", 00:26:20.188 "listen_addresses": [ 00:26:20.188 { 00:26:20.188 "trtype": "TCP", 00:26:20.188 "adrfam": "IPv4", 00:26:20.188 "traddr": "10.0.0.2", 00:26:20.188 "trsvcid": "4420" 00:26:20.188 } 00:26:20.188 ], 00:26:20.188 "allow_any_host": true, 00:26:20.188 "hosts": [], 00:26:20.189 "serial_number": "SPDK00000000000001", 00:26:20.189 "model_number": "SPDK bdev Controller", 00:26:20.189 "max_namespaces": 2, 00:26:20.189 "min_cntlid": 1, 00:26:20.189 "max_cntlid": 65519, 00:26:20.189 "namespaces": [ 00:26:20.189 { 00:26:20.189 "nsid": 1, 00:26:20.189 "bdev_name": "Malloc0", 00:26:20.189 "name": "Malloc0", 00:26:20.189 "nguid": "17503E7C804246E6A65FD4A51E912A38", 00:26:20.189 "uuid": "17503e7c-8042-46e6-a65f-d4a51e912a38" 00:26:20.189 }, 00:26:20.189 { 00:26:20.189 "nsid": 2, 00:26:20.189 "bdev_name": "Malloc1", 00:26:20.189 "name": "Malloc1", 00:26:20.189 "nguid": "411FE8C3A997406CAE7E355148751A02", 00:26:20.189 "uuid": "411fe8c3-a997-406c-ae7e-355148751a02" 00:26:20.189 } 00:26:20.189 ] 00:26:20.189 } 00:26:20.189 ] 00:26:20.189 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.189 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 741317 00:26:20.446 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:20.446 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.446 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.446 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.446 13:58:02 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:20.446 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.446 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.446 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.446 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:20.446 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.446 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.446 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.446 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:20.446 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:26:20.446 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:20.447 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:26:20.447 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:20.447 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:26:20.447 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:20.447 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:20.447 rmmod nvme_tcp 00:26:20.447 rmmod nvme_fabrics 00:26:20.447 rmmod nvme_keyring 00:26:20.447 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:20.447 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:26:20.447 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:26:20.447 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
741156 ']' 00:26:20.447 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 741156 00:26:20.447 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 741156 ']' 00:26:20.447 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 741156 00:26:20.447 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:26:20.447 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:20.447 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 741156 00:26:20.447 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:20.447 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:20.447 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 741156' 00:26:20.447 killing process with pid 741156 00:26:20.447 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 741156 00:26:20.447 13:58:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 741156 00:26:20.705 13:58:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:20.705 13:58:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:20.705 13:58:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:20.705 13:58:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:26:20.705 13:58:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:26:20.705 13:58:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:20.705 13:58:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:26:20.705 13:58:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:20.705 13:58:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:20.705 13:58:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.705 13:58:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:20.705 13:58:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.609 13:58:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:22.609 00:26:22.609 real 0m9.231s 00:26:22.609 user 0m5.097s 00:26:22.609 sys 0m4.877s 00:26:22.609 13:58:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:22.609 13:58:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:22.609 ************************************ 00:26:22.609 END TEST nvmf_aer 00:26:22.609 ************************************ 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.868 ************************************ 00:26:22.868 START TEST nvmf_async_init 00:26:22.868 ************************************ 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:22.868 * Looking for test storage... 
00:26:22.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:22.868 13:58:05 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:22.868 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:22.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.868 --rc genhtml_branch_coverage=1 00:26:22.868 --rc genhtml_function_coverage=1 00:26:22.869 --rc genhtml_legend=1 00:26:22.869 --rc geninfo_all_blocks=1 00:26:22.869 --rc geninfo_unexecuted_blocks=1 00:26:22.869 
00:26:22.869 ' 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:22.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.869 --rc genhtml_branch_coverage=1 00:26:22.869 --rc genhtml_function_coverage=1 00:26:22.869 --rc genhtml_legend=1 00:26:22.869 --rc geninfo_all_blocks=1 00:26:22.869 --rc geninfo_unexecuted_blocks=1 00:26:22.869 00:26:22.869 ' 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:22.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.869 --rc genhtml_branch_coverage=1 00:26:22.869 --rc genhtml_function_coverage=1 00:26:22.869 --rc genhtml_legend=1 00:26:22.869 --rc geninfo_all_blocks=1 00:26:22.869 --rc geninfo_unexecuted_blocks=1 00:26:22.869 00:26:22.869 ' 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:22.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.869 --rc genhtml_branch_coverage=1 00:26:22.869 --rc genhtml_function_coverage=1 00:26:22.869 --rc genhtml_legend=1 00:26:22.869 --rc geninfo_all_blocks=1 00:26:22.869 --rc geninfo_unexecuted_blocks=1 00:26:22.869 00:26:22.869 ' 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:22.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:26:22.869 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:23.127 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:26:23.127 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:26:23.127 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=f615d78ea47c4e62832dc6c0913e5593 00:26:23.127 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:26:23.127 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:23.127 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:23.127 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:23.127 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:23.127 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:23.127 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.127 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:23.127 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.127 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:23.127 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:23.127 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:26:23.127 13:58:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:29.697 13:58:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:29.697 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:29.698 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:29.698 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:29.698 Found net devices under 0000:86:00.0: cvl_0_0 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:29.698 Found net devices under 0000:86:00.1: cvl_0_1 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:29.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:29.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.422 ms 00:26:29.698 00:26:29.698 --- 10.0.0.2 ping statistics --- 00:26:29.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.698 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:29.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:29.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:26:29.698 00:26:29.698 --- 10.0.0.1 ping statistics --- 00:26:29.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.698 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:29.698 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=744922 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 744922 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 744922 ']' 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.699 [2024-12-05 13:58:11.432398] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
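The `[: : integer expression expected` error from common.sh line 33 earlier in this log comes from bash evaluating `'[' '' -eq 1 ']'`: an empty string is not a valid operand for the arithmetic `-eq` test. A minimal sketch of the defensive pattern (the flag name here is hypothetical, not the variable common.sh actually tests):

```shell
# Reproduce the failure mode from common.sh line 33 and the usual fix:
# defaulting an empty/unset flag to 0 before the arithmetic comparison.
flag=""                                   # empty, as in the log

# [ "$flag" -eq 1 ] would print "integer expression expected" here;
# ${flag:-0} substitutes 0 when the variable is unset *or* empty.
if [ "${flag:-0}" -eq 1 ]; then
    result=enabled
else
    result=disabled
fi
echo "$result"
```

The `:-` form (rather than `-`) matters: it also covers the empty-string case that triggered the error above.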
00:26:29.699 [2024-12-05 13:58:11.432441] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:29.699 [2024-12-05 13:58:11.511842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.699 [2024-12-05 13:58:11.552132] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:29.699 [2024-12-05 13:58:11.552167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:29.699 [2024-12-05 13:58:11.552174] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:29.699 [2024-12-05 13:58:11.552179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:29.699 [2024-12-05 13:58:11.552184] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
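The target/initiator split that nvmf/common.sh set up above (common.sh @265–@291) can be condensed into a short sketch. Device names `cvl_0_0`/`cvl_0_1`, the namespace name, addresses, and port are taken from the log; the commands need root, so this version only prints them by default (set `DRY_RUN=0` to execute for real):

```shell
# Condensed sketch of the netns wiring from the log: the target-side NIC is
# moved into a namespace, both sides get a 10.0.0.0/24 address, the NVMe/TCP
# port is opened in iptables, and connectivity is verified with ping.
run() { if [ "${DRY_RUN:-1}" -eq 1 ]; then echo "$*"; else "$@"; fi; }

setup_tcp_netns() {
    local ns=cvl_0_0_ns_spdk
    run ip netns add "$ns"
    run ip link set cvl_0_0 netns "$ns"                          # target NIC into the netns
    run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    run ip link set cvl_0_1 up
    run ip netns exec "$ns" ip link set cvl_0_0 up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                                       # initiator -> target check
}
setup_tcp_netns
```

With the interfaces wired this way, the target is then launched inside the namespace, matching the `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x1` invocation in the log.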
00:26:29.699 [2024-12-05 13:58:11.552759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.699 [2024-12-05 13:58:11.688590] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.699 null0 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f615d78ea47c4e62832dc6c0913e5593 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.699 [2024-12-05 13:58:11.732821] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.699 nvme0n1 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.699 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.699 [ 00:26:29.699 { 00:26:29.699 "name": "nvme0n1", 00:26:29.699 "aliases": [ 00:26:29.699 "f615d78e-a47c-4e62-832d-c6c0913e5593" 00:26:29.699 ], 00:26:29.699 "product_name": "NVMe disk", 00:26:29.699 "block_size": 512, 00:26:29.699 "num_blocks": 2097152, 00:26:29.699 "uuid": "f615d78e-a47c-4e62-832d-c6c0913e5593", 00:26:29.699 "numa_id": 1, 00:26:29.699 "assigned_rate_limits": { 00:26:29.699 "rw_ios_per_sec": 0, 00:26:29.699 "rw_mbytes_per_sec": 0, 00:26:29.699 "r_mbytes_per_sec": 0, 00:26:29.699 "w_mbytes_per_sec": 0 00:26:29.699 }, 00:26:29.699 "claimed": false, 00:26:29.699 "zoned": false, 00:26:29.699 "supported_io_types": { 00:26:29.699 "read": true, 00:26:29.700 "write": true, 00:26:29.700 "unmap": false, 00:26:29.700 "flush": true, 00:26:29.700 "reset": true, 00:26:29.700 "nvme_admin": true, 00:26:29.700 "nvme_io": true, 00:26:29.700 "nvme_io_md": false, 00:26:29.700 "write_zeroes": true, 00:26:29.700 "zcopy": false, 00:26:29.700 "get_zone_info": false, 00:26:29.700 "zone_management": false, 00:26:29.700 "zone_append": false, 00:26:29.700 "compare": true, 00:26:29.700 "compare_and_write": true, 00:26:29.700 "abort": true, 00:26:29.700 "seek_hole": false, 00:26:29.700 "seek_data": false, 00:26:29.700 "copy": true, 00:26:29.700 
"nvme_iov_md": false 00:26:29.700 }, 00:26:29.700 "memory_domains": [ 00:26:29.700 { 00:26:29.700 "dma_device_id": "system", 00:26:29.700 "dma_device_type": 1 00:26:29.700 } 00:26:29.700 ], 00:26:29.700 "driver_specific": { 00:26:29.700 "nvme": [ 00:26:29.700 { 00:26:29.700 "trid": { 00:26:29.700 "trtype": "TCP", 00:26:29.700 "adrfam": "IPv4", 00:26:29.700 "traddr": "10.0.0.2", 00:26:29.700 "trsvcid": "4420", 00:26:29.700 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:29.700 }, 00:26:29.700 "ctrlr_data": { 00:26:29.700 "cntlid": 1, 00:26:29.700 "vendor_id": "0x8086", 00:26:29.700 "model_number": "SPDK bdev Controller", 00:26:29.700 "serial_number": "00000000000000000000", 00:26:29.700 "firmware_revision": "25.01", 00:26:29.700 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:29.700 "oacs": { 00:26:29.700 "security": 0, 00:26:29.700 "format": 0, 00:26:29.700 "firmware": 0, 00:26:29.700 "ns_manage": 0 00:26:29.700 }, 00:26:29.700 "multi_ctrlr": true, 00:26:29.700 "ana_reporting": false 00:26:29.700 }, 00:26:29.700 "vs": { 00:26:29.700 "nvme_version": "1.3" 00:26:29.700 }, 00:26:29.700 "ns_data": { 00:26:29.700 "id": 1, 00:26:29.700 "can_share": true 00:26:29.700 } 00:26:29.700 } 00:26:29.700 ], 00:26:29.700 "mp_policy": "active_passive" 00:26:29.700 } 00:26:29.700 } 00:26:29.700 ] 00:26:29.700 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.700 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:29.700 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.700 13:58:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.700 [2024-12-05 13:58:11.997372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:29.700 [2024-12-05 13:58:11.997425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x227cf80 (9): Bad file descriptor 00:26:29.700 [2024-12-05 13:58:12.129456] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:26:29.700 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.700 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:29.700 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.700 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.700 [ 00:26:29.700 { 00:26:29.700 "name": "nvme0n1", 00:26:29.700 "aliases": [ 00:26:29.700 "f615d78e-a47c-4e62-832d-c6c0913e5593" 00:26:29.700 ], 00:26:29.700 "product_name": "NVMe disk", 00:26:29.700 "block_size": 512, 00:26:29.700 "num_blocks": 2097152, 00:26:29.700 "uuid": "f615d78e-a47c-4e62-832d-c6c0913e5593", 00:26:29.700 "numa_id": 1, 00:26:29.700 "assigned_rate_limits": { 00:26:29.700 "rw_ios_per_sec": 0, 00:26:29.700 "rw_mbytes_per_sec": 0, 00:26:29.700 "r_mbytes_per_sec": 0, 00:26:29.700 "w_mbytes_per_sec": 0 00:26:29.700 }, 00:26:29.700 "claimed": false, 00:26:29.700 "zoned": false, 00:26:29.700 "supported_io_types": { 00:26:29.700 "read": true, 00:26:29.700 "write": true, 00:26:29.700 "unmap": false, 00:26:29.700 "flush": true, 00:26:29.700 "reset": true, 00:26:29.700 "nvme_admin": true, 00:26:29.700 "nvme_io": true, 00:26:29.700 "nvme_io_md": false, 00:26:29.700 "write_zeroes": true, 00:26:29.700 "zcopy": false, 00:26:29.700 "get_zone_info": false, 00:26:29.700 "zone_management": false, 00:26:29.700 "zone_append": false, 00:26:29.700 "compare": true, 00:26:29.700 "compare_and_write": true, 00:26:29.700 "abort": true, 00:26:29.700 "seek_hole": false, 00:26:29.700 "seek_data": false, 00:26:29.700 "copy": true, 00:26:29.700 "nvme_iov_md": false 00:26:29.700 }, 00:26:29.700 "memory_domains": [ 
00:26:29.700 { 00:26:29.700 "dma_device_id": "system", 00:26:29.700 "dma_device_type": 1 00:26:29.700 } 00:26:29.700 ], 00:26:29.700 "driver_specific": { 00:26:29.700 "nvme": [ 00:26:29.700 { 00:26:29.700 "trid": { 00:26:29.700 "trtype": "TCP", 00:26:29.700 "adrfam": "IPv4", 00:26:29.700 "traddr": "10.0.0.2", 00:26:29.700 "trsvcid": "4420", 00:26:29.700 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:29.700 }, 00:26:29.700 "ctrlr_data": { 00:26:29.700 "cntlid": 2, 00:26:29.700 "vendor_id": "0x8086", 00:26:29.700 "model_number": "SPDK bdev Controller", 00:26:29.700 "serial_number": "00000000000000000000", 00:26:29.700 "firmware_revision": "25.01", 00:26:29.700 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:29.700 "oacs": { 00:26:29.700 "security": 0, 00:26:29.700 "format": 0, 00:26:29.700 "firmware": 0, 00:26:29.700 "ns_manage": 0 00:26:29.700 }, 00:26:29.700 "multi_ctrlr": true, 00:26:29.700 "ana_reporting": false 00:26:29.700 }, 00:26:29.700 "vs": { 00:26:29.700 "nvme_version": "1.3" 00:26:29.700 }, 00:26:29.700 "ns_data": { 00:26:29.700 "id": 1, 00:26:29.700 "can_share": true 00:26:29.700 } 00:26:29.700 } 00:26:29.700 ], 00:26:29.701 "mp_policy": "active_passive" 00:26:29.701 } 00:26:29.701 } 00:26:29.701 ] 00:26:29.701 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.701 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.701 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.701 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.701 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.701 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:26:29.701 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.KJZ66RMqBu 
00:26:29.701 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:29.701 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.KJZ66RMqBu 00:26:29.701 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.KJZ66RMqBu 00:26:29.701 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.701 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.701 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.701 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:29.701 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.701 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.701 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.701 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:26:29.701 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.701 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.701 [2024-12-05 13:58:12.201979] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:29.701 [2024-12-05 13:58:12.202071] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:29.701 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
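The TLS path exercised above can be sketched as follows. The key value is the sample interchange PSK shown in the log (not a secret), and the subsystem/host NQNs, address, and port are also taken from the log; the `scripts/rpc.py` form of the RPC calls is an assumption (the test script uses its `rpc_cmd` wrapper), and the calls are printed rather than executed since they need a live `nvmf_tgt`:

```shell
# PSK file setup as in async_init.sh @53-@56: temp file, key material, 0600 perms.
key_path="$(mktemp)"
echo -n "NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$key_path"
chmod 0600 "$key_path"

# RPC sequence from the log, printed as a sketch (rpc.py path assumed):
# register the key, restrict the subsystem, open a TLS listener on 4421,
# grant the host with the PSK, then attach the initiator with the same PSK.
cat <<EOF
scripts/rpc.py keyring_file_add_key key0 $key_path
scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
EOF
```

Both the listener and the attach report "TLS support is considered experimental" in the log, so this flow is exercising an experimental code path.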
00:26:29.701 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:26:29.701 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.701 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.701 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.701 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:29.701 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.701 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.701 [2024-12-05 13:58:12.218036] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:29.961 nvme0n1 00:26:29.961 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.961 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:29.961 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.961 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.961 [ 00:26:29.961 { 00:26:29.961 "name": "nvme0n1", 00:26:29.961 "aliases": [ 00:26:29.961 "f615d78e-a47c-4e62-832d-c6c0913e5593" 00:26:29.961 ], 00:26:29.961 "product_name": "NVMe disk", 00:26:29.961 "block_size": 512, 00:26:29.961 "num_blocks": 2097152, 00:26:29.961 "uuid": "f615d78e-a47c-4e62-832d-c6c0913e5593", 00:26:29.961 "numa_id": 1, 00:26:29.961 "assigned_rate_limits": { 00:26:29.961 "rw_ios_per_sec": 0, 00:26:29.961 
"rw_mbytes_per_sec": 0, 00:26:29.961 "r_mbytes_per_sec": 0, 00:26:29.961 "w_mbytes_per_sec": 0 00:26:29.961 }, 00:26:29.961 "claimed": false, 00:26:29.961 "zoned": false, 00:26:29.961 "supported_io_types": { 00:26:29.961 "read": true, 00:26:29.961 "write": true, 00:26:29.961 "unmap": false, 00:26:29.961 "flush": true, 00:26:29.961 "reset": true, 00:26:29.961 "nvme_admin": true, 00:26:29.961 "nvme_io": true, 00:26:29.961 "nvme_io_md": false, 00:26:29.961 "write_zeroes": true, 00:26:29.961 "zcopy": false, 00:26:29.961 "get_zone_info": false, 00:26:29.961 "zone_management": false, 00:26:29.961 "zone_append": false, 00:26:29.961 "compare": true, 00:26:29.961 "compare_and_write": true, 00:26:29.961 "abort": true, 00:26:29.961 "seek_hole": false, 00:26:29.961 "seek_data": false, 00:26:29.961 "copy": true, 00:26:29.961 "nvme_iov_md": false 00:26:29.961 }, 00:26:29.961 "memory_domains": [ 00:26:29.961 { 00:26:29.961 "dma_device_id": "system", 00:26:29.961 "dma_device_type": 1 00:26:29.961 } 00:26:29.961 ], 00:26:29.961 "driver_specific": { 00:26:29.961 "nvme": [ 00:26:29.961 { 00:26:29.961 "trid": { 00:26:29.961 "trtype": "TCP", 00:26:29.961 "adrfam": "IPv4", 00:26:29.961 "traddr": "10.0.0.2", 00:26:29.961 "trsvcid": "4421", 00:26:29.961 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:29.961 }, 00:26:29.961 "ctrlr_data": { 00:26:29.961 "cntlid": 3, 00:26:29.961 "vendor_id": "0x8086", 00:26:29.961 "model_number": "SPDK bdev Controller", 00:26:29.961 "serial_number": "00000000000000000000", 00:26:29.961 "firmware_revision": "25.01", 00:26:29.961 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:29.961 "oacs": { 00:26:29.961 "security": 0, 00:26:29.961 "format": 0, 00:26:29.961 "firmware": 0, 00:26:29.961 "ns_manage": 0 00:26:29.961 }, 00:26:29.961 "multi_ctrlr": true, 00:26:29.961 "ana_reporting": false 00:26:29.961 }, 00:26:29.961 "vs": { 00:26:29.961 "nvme_version": "1.3" 00:26:29.961 }, 00:26:29.961 "ns_data": { 00:26:29.961 "id": 1, 00:26:29.961 "can_share": true 00:26:29.961 } 
00:26:29.961 } 00:26:29.961 ], 00:26:29.961 "mp_policy": "active_passive" 00:26:29.961 } 00:26:29.961 } 00:26:29.961 ] 00:26:29.961 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.961 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.961 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.961 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:29.961 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.961 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.KJZ66RMqBu 00:26:29.961 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:26:29.961 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:26:29.961 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:29.961 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:26:29.961 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:29.961 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:26:29.961 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:29.961 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:29.961 rmmod nvme_tcp 00:26:29.961 rmmod nvme_fabrics 00:26:29.961 rmmod nvme_keyring 00:26:29.961 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:29.961 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:26:29.961 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:26:29.961 13:58:12 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 744922 ']' 00:26:29.962 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 744922 00:26:29.962 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 744922 ']' 00:26:29.962 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 744922 00:26:29.962 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:26:29.962 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:29.962 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 744922 00:26:29.962 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:29.962 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:29.962 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 744922' 00:26:29.962 killing process with pid 744922 00:26:29.962 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 744922 00:26:29.962 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 744922 00:26:30.222 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:30.222 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:30.222 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:30.222 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:26:30.222 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:26:30.222 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:30.222 13:58:12 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:26:30.222 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:30.222 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:30.222 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.222 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:30.222 13:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.130 13:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:32.130 00:26:32.130 real 0m9.423s 00:26:32.130 user 0m3.104s 00:26:32.130 sys 0m4.750s 00:26:32.130 13:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:32.130 13:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:32.130 ************************************ 00:26:32.130 END TEST nvmf_async_init 00:26:32.130 ************************************ 00:26:32.130 13:58:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:32.130 13:58:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:32.130 13:58:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:32.130 13:58:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.390 ************************************ 00:26:32.390 START TEST dma 00:26:32.390 ************************************ 00:26:32.390 13:58:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:32.390 * 
Looking for test storage... 00:26:32.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:32.390 13:58:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:32.390 13:58:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:26:32.390 13:58:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:32.390 13:58:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:32.390 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:32.390 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:32.390 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:32.390 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:26:32.390 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:26:32.390 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:26:32.390 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:26:32.390 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:26:32.390 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:26:32.390 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:26:32.390 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:32.390 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:26:32.390 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:26:32.390 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:32.390 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:32.390 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:26:32.390 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:32.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.391 --rc genhtml_branch_coverage=1 00:26:32.391 --rc genhtml_function_coverage=1 00:26:32.391 --rc genhtml_legend=1 00:26:32.391 --rc geninfo_all_blocks=1 00:26:32.391 --rc geninfo_unexecuted_blocks=1 00:26:32.391 00:26:32.391 ' 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:32.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.391 --rc genhtml_branch_coverage=1 00:26:32.391 --rc genhtml_function_coverage=1 
00:26:32.391 --rc genhtml_legend=1 00:26:32.391 --rc geninfo_all_blocks=1 00:26:32.391 --rc geninfo_unexecuted_blocks=1 00:26:32.391 00:26:32.391 ' 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:32.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.391 --rc genhtml_branch_coverage=1 00:26:32.391 --rc genhtml_function_coverage=1 00:26:32.391 --rc genhtml_legend=1 00:26:32.391 --rc geninfo_all_blocks=1 00:26:32.391 --rc geninfo_unexecuted_blocks=1 00:26:32.391 00:26:32.391 ' 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:32.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.391 --rc genhtml_branch_coverage=1 00:26:32.391 --rc genhtml_function_coverage=1 00:26:32.391 --rc genhtml_legend=1 00:26:32.391 --rc geninfo_all_blocks=1 00:26:32.391 --rc geninfo_unexecuted_blocks=1 00:26:32.391 00:26:32.391 ' 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:26:32.391 
13:58:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:32.391 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:26:32.391 00:26:32.391 real 0m0.208s 00:26:32.391 user 0m0.132s 00:26:32.391 sys 0m0.090s 00:26:32.391 13:58:14 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:32.391 13:58:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:32.391 ************************************ 00:26:32.391 END TEST dma 00:26:32.391 ************************************ 00:26:32.651 13:58:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:32.651 13:58:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:32.651 13:58:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:32.651 13:58:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.651 ************************************ 00:26:32.651 START TEST nvmf_identify 00:26:32.651 ************************************ 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:32.651 * Looking for test storage... 
00:26:32.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:32.651 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:32.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.651 --rc genhtml_branch_coverage=1 00:26:32.651 --rc genhtml_function_coverage=1 00:26:32.651 --rc genhtml_legend=1 00:26:32.652 --rc geninfo_all_blocks=1 00:26:32.652 --rc geninfo_unexecuted_blocks=1 00:26:32.652 00:26:32.652 ' 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:26:32.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.652 --rc genhtml_branch_coverage=1 00:26:32.652 --rc genhtml_function_coverage=1 00:26:32.652 --rc genhtml_legend=1 00:26:32.652 --rc geninfo_all_blocks=1 00:26:32.652 --rc geninfo_unexecuted_blocks=1 00:26:32.652 00:26:32.652 ' 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:32.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.652 --rc genhtml_branch_coverage=1 00:26:32.652 --rc genhtml_function_coverage=1 00:26:32.652 --rc genhtml_legend=1 00:26:32.652 --rc geninfo_all_blocks=1 00:26:32.652 --rc geninfo_unexecuted_blocks=1 00:26:32.652 00:26:32.652 ' 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:32.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.652 --rc genhtml_branch_coverage=1 00:26:32.652 --rc genhtml_function_coverage=1 00:26:32.652 --rc genhtml_legend=1 00:26:32.652 --rc geninfo_all_blocks=1 00:26:32.652 --rc geninfo_unexecuted_blocks=1 00:26:32.652 00:26:32.652 ' 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:32.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:32.652 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:32.910 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:26:32.910 13:58:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:39.486 13:58:20 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:39.486 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:39.486 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:39.487 
13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:39.487 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:39.487 Found net devices under 0000:86:00.0: cvl_0_0 00:26:39.487 13:58:20 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:39.487 Found net devices under 0000:86:00.1: cvl_0_1 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
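The device-discovery trace above matches PCI `vendor:device` pairs (`intel=0x8086`, `mellanox=0x15b3`) against per-family ID lists (`e810`, `x722`, `mlx`) before picking net devices. A minimal bash sketch of that classification, using only IDs visible in the trace; the function name is hypothetical and this is not the actual `gather_supported_nvmf_pci_devs` implementation:

```shell
# Hypothetical helper mirroring the vendor:device matching seen in the
# trace: 0x8086:0x1592/0x159b -> e810, 0x8086:0x37d2 -> x722,
# 0x15b3:* (several device IDs in the log) -> mlx.
classify_nvmf_pci() {
	case "$1" in
		0x8086:0x1592 | 0x8086:0x159b) echo e810 ;;
		0x8086:0x37d2) echo x722 ;;
		0x15b3:*) echo mlx ;;
		*) echo unknown ;;
	esac
}
```

For the devices found in this run (`0000:86:00.0`/`0000:86:00.1`, `0x8086 - 0x159b`), this yields `e810`, which matches the trace taking the `[[ e810 == e810 ]]` branch.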
00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:39.487 13:58:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:39.487 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:39.487 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:39.487 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:39.487 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:39.487 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:26:39.487 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:39.487 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:39.487 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:39.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:39.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:26:39.487 00:26:39.487 --- 10.0.0.2 ping statistics --- 00:26:39.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.487 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:26:39.487 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:39.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:39.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:26:39.487 00:26:39.487 --- 10.0.0.1 ping statistics --- 00:26:39.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.487 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:26:39.487 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:39.487 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:26:39.487 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:39.487 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:39.487 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:39.487 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:39.487 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:39.487 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:39.487 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:39.487 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:26:39.487 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:39.487 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:39.487 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=748670 00:26:39.487 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:39.487 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 748670 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 748670 ']' 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:39.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
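The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from a poll-until-ready loop run after launching `nvmf_tgt` in the target namespace. A hedged sketch of that pattern, generalized to a plain path-existence check (the real `waitforlisten` helper also probes the RPC socket and checks the PID; the function name and retry count here are assumptions, not SPDK code):

```shell
# Poll until a path (e.g. /var/tmp/spdk.sock) appears, up to a retry
# budget; return 0 on success, 1 on timeout. Sketch only.
wait_for_path() {
	local path=$1 retries=${2:-100}
	while ((retries-- > 0)); do
		[ -e "$path" ] && return 0
		sleep 0.1
	done
	return 1
}
```

In the run above the socket appears quickly, so the subsequent RPCs (`nvmf_create_transport`, `bdev_malloc_create`, `nvmf_create_subsystem`, ...) proceed without the 100-retry budget being exhausted.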
00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:39.488 [2024-12-05 13:58:21.240242] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:26:39.488 [2024-12-05 13:58:21.240283] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:39.488 [2024-12-05 13:58:21.322278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:39.488 [2024-12-05 13:58:21.366198] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:39.488 [2024-12-05 13:58:21.366237] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:39.488 [2024-12-05 13:58:21.366244] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:39.488 [2024-12-05 13:58:21.366251] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:39.488 [2024-12-05 13:58:21.366256] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:39.488 [2024-12-05 13:58:21.367676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.488 [2024-12-05 13:58:21.367787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:39.488 [2024-12-05 13:58:21.367801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:39.488 [2024-12-05 13:58:21.367807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:39.488 [2024-12-05 13:58:21.478168] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:39.488 Malloc0 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.488 13:58:21 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:39.488 [2024-12-05 13:58:21.574601] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:39.488 13:58:21 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:39.488 [ 00:26:39.488 { 00:26:39.488 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:39.488 "subtype": "Discovery", 00:26:39.488 "listen_addresses": [ 00:26:39.488 { 00:26:39.488 "trtype": "TCP", 00:26:39.488 "adrfam": "IPv4", 00:26:39.488 "traddr": "10.0.0.2", 00:26:39.488 "trsvcid": "4420" 00:26:39.488 } 00:26:39.488 ], 00:26:39.488 "allow_any_host": true, 00:26:39.488 "hosts": [] 00:26:39.488 }, 00:26:39.488 { 00:26:39.488 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:39.488 "subtype": "NVMe", 00:26:39.488 "listen_addresses": [ 00:26:39.488 { 00:26:39.488 "trtype": "TCP", 00:26:39.488 "adrfam": "IPv4", 00:26:39.488 "traddr": "10.0.0.2", 00:26:39.488 "trsvcid": "4420" 00:26:39.488 } 00:26:39.488 ], 00:26:39.488 "allow_any_host": true, 00:26:39.488 "hosts": [], 00:26:39.488 "serial_number": "SPDK00000000000001", 00:26:39.488 "model_number": "SPDK bdev Controller", 00:26:39.488 "max_namespaces": 32, 00:26:39.488 "min_cntlid": 1, 00:26:39.488 "max_cntlid": 65519, 00:26:39.488 "namespaces": [ 00:26:39.488 { 00:26:39.488 "nsid": 1, 00:26:39.488 "bdev_name": "Malloc0", 00:26:39.488 "name": "Malloc0", 00:26:39.488 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:26:39.488 "eui64": "ABCDEF0123456789", 00:26:39.488 "uuid": "d848513d-0b10-47f7-8fd5-7a85da7b5d42" 00:26:39.488 } 00:26:39.488 ] 00:26:39.488 } 00:26:39.488 ] 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.488 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:26:39.488 [2024-12-05 13:58:21.628294] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:26:39.488 [2024-12-05 13:58:21.628342] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid748768 ] 00:26:39.488 [2024-12-05 13:58:21.666252] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:26:39.488 [2024-12-05 13:58:21.666296] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:39.488 [2024-12-05 13:58:21.666301] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:39.488 [2024-12-05 13:58:21.666315] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:39.488 [2024-12-05 13:58:21.666324] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:39.488 [2024-12-05 13:58:21.670667] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:26:39.489 [2024-12-05 13:58:21.670706] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x20d3690 0 00:26:39.489 [2024-12-05 13:58:21.678381] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:39.489 [2024-12-05 13:58:21.678394] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:39.489 [2024-12-05 13:58:21.678399] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:39.489 [2024-12-05 13:58:21.678402] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:39.489 [2024-12-05 13:58:21.678436] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.489 [2024-12-05 13:58:21.678441] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.489 [2024-12-05 13:58:21.678445] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20d3690) 00:26:39.489 [2024-12-05 13:58:21.678456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:39.489 [2024-12-05 13:58:21.678472] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135100, cid 0, qid 0 00:26:39.489 [2024-12-05 13:58:21.686377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.489 [2024-12-05 13:58:21.686385] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.489 [2024-12-05 13:58:21.686388] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.489 [2024-12-05 13:58:21.686392] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135100) on tqpair=0x20d3690 00:26:39.489 [2024-12-05 13:58:21.686401] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:39.489 [2024-12-05 13:58:21.686408] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:26:39.489 [2024-12-05 13:58:21.686416] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:26:39.489 [2024-12-05 13:58:21.686430] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.489 [2024-12-05 13:58:21.686433] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.489 [2024-12-05 13:58:21.686436] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20d3690) 
00:26:39.489 [2024-12-05 13:58:21.686443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.489 [2024-12-05 13:58:21.686456] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135100, cid 0, qid 0 00:26:39.489 [2024-12-05 13:58:21.686618] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.489 [2024-12-05 13:58:21.686624] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.489 [2024-12-05 13:58:21.686627] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.489 [2024-12-05 13:58:21.686631] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135100) on tqpair=0x20d3690 00:26:39.489 [2024-12-05 13:58:21.686637] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:26:39.489 [2024-12-05 13:58:21.686644] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:26:39.489 [2024-12-05 13:58:21.686650] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.489 [2024-12-05 13:58:21.686654] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.489 [2024-12-05 13:58:21.686657] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20d3690) 00:26:39.489 [2024-12-05 13:58:21.686662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.489 [2024-12-05 13:58:21.686673] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135100, cid 0, qid 0 00:26:39.489 [2024-12-05 13:58:21.686732] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.489 [2024-12-05 13:58:21.686738] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:26:39.489 [2024-12-05 13:58:21.686741] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.489 [2024-12-05 13:58:21.686744] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135100) on tqpair=0x20d3690 00:26:39.489 [2024-12-05 13:58:21.686749] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:26:39.489 [2024-12-05 13:58:21.686756] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:26:39.489 [2024-12-05 13:58:21.686762] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.489 [2024-12-05 13:58:21.686765] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.489 [2024-12-05 13:58:21.686768] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20d3690) 00:26:39.489 [2024-12-05 13:58:21.686774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.489 [2024-12-05 13:58:21.686783] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135100, cid 0, qid 0 00:26:39.489 [2024-12-05 13:58:21.686844] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.489 [2024-12-05 13:58:21.686850] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.489 [2024-12-05 13:58:21.686853] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.489 [2024-12-05 13:58:21.686856] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135100) on tqpair=0x20d3690 00:26:39.489 [2024-12-05 13:58:21.686861] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:39.489 [2024-12-05 13:58:21.686869] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.489 [2024-12-05 13:58:21.686875] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.489 [2024-12-05 13:58:21.686878] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20d3690) 00:26:39.489 [2024-12-05 13:58:21.686883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.489 [2024-12-05 13:58:21.686893] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135100, cid 0, qid 0 00:26:39.489 [2024-12-05 13:58:21.686953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.489 [2024-12-05 13:58:21.686959] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.489 [2024-12-05 13:58:21.686962] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.489 [2024-12-05 13:58:21.686965] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135100) on tqpair=0x20d3690 00:26:39.489 [2024-12-05 13:58:21.686969] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:26:39.489 [2024-12-05 13:58:21.686974] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:26:39.489 [2024-12-05 13:58:21.686980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:39.489 [2024-12-05 13:58:21.687089] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:26:39.489 [2024-12-05 13:58:21.687094] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:26:39.489 [2024-12-05 13:58:21.687101] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.489 [2024-12-05 13:58:21.687104] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.489 [2024-12-05 13:58:21.687107] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20d3690) 00:26:39.489 [2024-12-05 13:58:21.687113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.489 [2024-12-05 13:58:21.687123] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135100, cid 0, qid 0 00:26:39.489 [2024-12-05 13:58:21.687182] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.489 [2024-12-05 13:58:21.687188] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.489 [2024-12-05 13:58:21.687191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.489 [2024-12-05 13:58:21.687194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135100) on tqpair=0x20d3690 00:26:39.489 [2024-12-05 13:58:21.687198] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:39.489 [2024-12-05 13:58:21.687206] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.489 [2024-12-05 13:58:21.687210] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.489 [2024-12-05 13:58:21.687213] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20d3690) 00:26:39.489 [2024-12-05 13:58:21.687218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.489 [2024-12-05 13:58:21.687227] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135100, cid 0, qid 0 00:26:39.489 [2024-12-05 
13:58:21.687293] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.489 [2024-12-05 13:58:21.687299] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.489 [2024-12-05 13:58:21.687301] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.489 [2024-12-05 13:58:21.687305] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135100) on tqpair=0x20d3690 00:26:39.490 [2024-12-05 13:58:21.687308] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:39.490 [2024-12-05 13:58:21.687315] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:26:39.490 [2024-12-05 13:58:21.687322] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:26:39.490 [2024-12-05 13:58:21.687332] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:26:39.490 [2024-12-05 13:58:21.687340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.490 [2024-12-05 13:58:21.687343] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20d3690) 00:26:39.490 [2024-12-05 13:58:21.687349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.490 [2024-12-05 13:58:21.687359] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135100, cid 0, qid 0 00:26:39.490 [2024-12-05 13:58:21.687479] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:39.490 [2024-12-05 13:58:21.687485] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:26:39.490 [2024-12-05 13:58:21.687488] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:39.490 [2024-12-05 13:58:21.687491] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20d3690): datao=0, datal=4096, cccid=0 00:26:39.490 [2024-12-05 13:58:21.687495] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2135100) on tqpair(0x20d3690): expected_datao=0, payload_size=4096 00:26:39.490 [2024-12-05 13:58:21.687499] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.490 [2024-12-05 13:58:21.687505] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:39.490 [2024-12-05 13:58:21.687509] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:39.490 [2024-12-05 13:58:21.687521] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.490 [2024-12-05 13:58:21.687526] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.490 [2024-12-05 13:58:21.687529] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.490 [2024-12-05 13:58:21.687532] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135100) on tqpair=0x20d3690 00:26:39.490 [2024-12-05 13:58:21.687539] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:26:39.490 [2024-12-05 13:58:21.687543] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:26:39.490 [2024-12-05 13:58:21.687547] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:26:39.490 [2024-12-05 13:58:21.687551] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:26:39.490 [2024-12-05 13:58:21.687555] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:26:39.490 [2024-12-05 13:58:21.687559] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:26:39.490 [2024-12-05 13:58:21.687567] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:26:39.490 [2024-12-05 13:58:21.687573] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.490 [2024-12-05 13:58:21.687576] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.490 [2024-12-05 13:58:21.687579] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20d3690) 00:26:39.490 [2024-12-05 13:58:21.687586] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:39.490 [2024-12-05 13:58:21.687596] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135100, cid 0, qid 0 00:26:39.490 [2024-12-05 13:58:21.687659] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.490 [2024-12-05 13:58:21.687665] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.490 [2024-12-05 13:58:21.687668] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.490 [2024-12-05 13:58:21.687671] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135100) on tqpair=0x20d3690 00:26:39.490 [2024-12-05 13:58:21.687677] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.490 [2024-12-05 13:58:21.687680] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.490 [2024-12-05 13:58:21.687683] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20d3690) 00:26:39.490 [2024-12-05 13:58:21.687688] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.490 [2024-12-05 13:58:21.687694] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.490 [2024-12-05 13:58:21.687697] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.490 [2024-12-05 13:58:21.687700] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x20d3690) 00:26:39.490 [2024-12-05 13:58:21.687705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.490 [2024-12-05 13:58:21.687710] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.490 [2024-12-05 13:58:21.687713] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.490 [2024-12-05 13:58:21.687716] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x20d3690) 00:26:39.490 [2024-12-05 13:58:21.687721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.490 [2024-12-05 13:58:21.687726] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.490 [2024-12-05 13:58:21.687729] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.490 [2024-12-05 13:58:21.687732] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d3690) 00:26:39.490 [2024-12-05 13:58:21.687737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.490 [2024-12-05 13:58:21.687741] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:39.490 [2024-12-05 13:58:21.687752] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:39.490 [2024-12-05 13:58:21.687757] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.490 [2024-12-05 13:58:21.687760] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20d3690) 00:26:39.490 [2024-12-05 13:58:21.687766] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.490 [2024-12-05 13:58:21.687777] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135100, cid 0, qid 0 00:26:39.490 [2024-12-05 13:58:21.687782] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135280, cid 1, qid 0 00:26:39.490 [2024-12-05 13:58:21.687786] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135400, cid 2, qid 0 00:26:39.490 [2024-12-05 13:58:21.687790] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135580, cid 3, qid 0 00:26:39.490 [2024-12-05 13:58:21.687794] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135700, cid 4, qid 0 00:26:39.490 [2024-12-05 13:58:21.687883] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.490 [2024-12-05 13:58:21.687890] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.490 [2024-12-05 13:58:21.687893] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.490 [2024-12-05 13:58:21.687896] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135700) on tqpair=0x20d3690 00:26:39.490 [2024-12-05 13:58:21.687903] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:26:39.490 [2024-12-05 13:58:21.687907] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:26:39.490 [2024-12-05 13:58:21.687916] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.490 [2024-12-05 13:58:21.687919] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20d3690) 00:26:39.490 [2024-12-05 13:58:21.687925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.490 [2024-12-05 13:58:21.687935] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135700, cid 4, qid 0 00:26:39.490 [2024-12-05 13:58:21.688018] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:39.490 [2024-12-05 13:58:21.688024] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:39.490 [2024-12-05 13:58:21.688027] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:39.490 [2024-12-05 13:58:21.688030] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20d3690): datao=0, datal=4096, cccid=4 00:26:39.490 [2024-12-05 13:58:21.688034] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2135700) on tqpair(0x20d3690): expected_datao=0, payload_size=4096 00:26:39.491 [2024-12-05 13:58:21.688038] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.491 [2024-12-05 13:58:21.688049] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:39.491 [2024-12-05 13:58:21.688053] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:39.491 [2024-12-05 13:58:21.729470] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.491 [2024-12-05 13:58:21.729481] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.491 [2024-12-05 13:58:21.729485] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.491 [2024-12-05 13:58:21.729488] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x2135700) on tqpair=0x20d3690 00:26:39.491 [2024-12-05 13:58:21.729500] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:26:39.491 [2024-12-05 13:58:21.729523] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.491 [2024-12-05 13:58:21.729528] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20d3690) 00:26:39.491 [2024-12-05 13:58:21.729534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.491 [2024-12-05 13:58:21.729540] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.491 [2024-12-05 13:58:21.729544] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.491 [2024-12-05 13:58:21.729547] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20d3690) 00:26:39.491 [2024-12-05 13:58:21.729552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.491 [2024-12-05 13:58:21.729568] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135700, cid 4, qid 0 00:26:39.491 [2024-12-05 13:58:21.729573] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135880, cid 5, qid 0 00:26:39.491 [2024-12-05 13:58:21.729668] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:39.491 [2024-12-05 13:58:21.729674] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:39.491 [2024-12-05 13:58:21.729677] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:39.491 [2024-12-05 13:58:21.729680] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20d3690): datao=0, datal=1024, cccid=4 00:26:39.491 [2024-12-05 13:58:21.729685] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2135700) on tqpair(0x20d3690): expected_datao=0, payload_size=1024 00:26:39.491 [2024-12-05 13:58:21.729689] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.491 [2024-12-05 13:58:21.729696] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:39.491 [2024-12-05 13:58:21.729700] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:39.491 [2024-12-05 13:58:21.729705] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.491 [2024-12-05 13:58:21.729710] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.491 [2024-12-05 13:58:21.729713] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.491 [2024-12-05 13:58:21.729716] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135880) on tqpair=0x20d3690 00:26:39.491 [2024-12-05 13:58:21.774376] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.491 [2024-12-05 13:58:21.774384] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.491 [2024-12-05 13:58:21.774387] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.491 [2024-12-05 13:58:21.774391] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135700) on tqpair=0x20d3690 00:26:39.491 [2024-12-05 13:58:21.774400] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.491 [2024-12-05 13:58:21.774404] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20d3690) 00:26:39.491 [2024-12-05 13:58:21.774410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.491 [2024-12-05 13:58:21.774426] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135700, cid 4, qid 0 00:26:39.491 [2024-12-05 13:58:21.774579] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:39.491 [2024-12-05 13:58:21.774585] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:39.491 [2024-12-05 13:58:21.774588] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:39.491 [2024-12-05 13:58:21.774591] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20d3690): datao=0, datal=3072, cccid=4 00:26:39.491 [2024-12-05 13:58:21.774595] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2135700) on tqpair(0x20d3690): expected_datao=0, payload_size=3072 00:26:39.491 [2024-12-05 13:58:21.774599] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.491 [2024-12-05 13:58:21.774605] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:39.491 [2024-12-05 13:58:21.774608] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:39.491 [2024-12-05 13:58:21.774660] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.491 [2024-12-05 13:58:21.774666] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.491 [2024-12-05 13:58:21.774669] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.491 [2024-12-05 13:58:21.774672] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135700) on tqpair=0x20d3690 00:26:39.491 [2024-12-05 13:58:21.774680] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.491 [2024-12-05 13:58:21.774683] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20d3690) 00:26:39.491 [2024-12-05 13:58:21.774689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.491 [2024-12-05 13:58:21.774702] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135700, cid 4, qid 0 00:26:39.491 [2024-12-05 
13:58:21.774769] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:39.491 [2024-12-05 13:58:21.774775] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:39.491 [2024-12-05 13:58:21.774778] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:39.491 [2024-12-05 13:58:21.774781] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20d3690): datao=0, datal=8, cccid=4 00:26:39.491 [2024-12-05 13:58:21.774784] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2135700) on tqpair(0x20d3690): expected_datao=0, payload_size=8 00:26:39.491 [2024-12-05 13:58:21.774788] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.491 [2024-12-05 13:58:21.774794] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:39.491 [2024-12-05 13:58:21.774800] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:39.491 [2024-12-05 13:58:21.816516] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.491 [2024-12-05 13:58:21.816526] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.491 [2024-12-05 13:58:21.816529] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.491 [2024-12-05 13:58:21.816533] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135700) on tqpair=0x20d3690 00:26:39.491 ===================================================== 00:26:39.491 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:39.491 ===================================================== 00:26:39.491 Controller Capabilities/Features 00:26:39.491 ================================ 00:26:39.491 Vendor ID: 0000 00:26:39.491 Subsystem Vendor ID: 0000 00:26:39.491 Serial Number: .................... 00:26:39.491 Model Number: ........................................ 
00:26:39.491 Firmware Version: 25.01 00:26:39.491 Recommended Arb Burst: 0 00:26:39.491 IEEE OUI Identifier: 00 00 00 00:26:39.491 Multi-path I/O 00:26:39.491 May have multiple subsystem ports: No 00:26:39.491 May have multiple controllers: No 00:26:39.491 Associated with SR-IOV VF: No 00:26:39.491 Max Data Transfer Size: 131072 00:26:39.491 Max Number of Namespaces: 0 00:26:39.491 Max Number of I/O Queues: 1024 00:26:39.491 NVMe Specification Version (VS): 1.3 00:26:39.491 NVMe Specification Version (Identify): 1.3 00:26:39.492 Maximum Queue Entries: 128 00:26:39.492 Contiguous Queues Required: Yes 00:26:39.492 Arbitration Mechanisms Supported 00:26:39.492 Weighted Round Robin: Not Supported 00:26:39.492 Vendor Specific: Not Supported 00:26:39.492 Reset Timeout: 15000 ms 00:26:39.492 Doorbell Stride: 4 bytes 00:26:39.492 NVM Subsystem Reset: Not Supported 00:26:39.492 Command Sets Supported 00:26:39.492 NVM Command Set: Supported 00:26:39.492 Boot Partition: Not Supported 00:26:39.492 Memory Page Size Minimum: 4096 bytes 00:26:39.492 Memory Page Size Maximum: 4096 bytes 00:26:39.492 Persistent Memory Region: Not Supported 00:26:39.492 Optional Asynchronous Events Supported 00:26:39.492 Namespace Attribute Notices: Not Supported 00:26:39.492 Firmware Activation Notices: Not Supported 00:26:39.492 ANA Change Notices: Not Supported 00:26:39.492 PLE Aggregate Log Change Notices: Not Supported 00:26:39.492 LBA Status Info Alert Notices: Not Supported 00:26:39.492 EGE Aggregate Log Change Notices: Not Supported 00:26:39.492 Normal NVM Subsystem Shutdown event: Not Supported 00:26:39.492 Zone Descriptor Change Notices: Not Supported 00:26:39.492 Discovery Log Change Notices: Supported 00:26:39.492 Controller Attributes 00:26:39.492 128-bit Host Identifier: Not Supported 00:26:39.492 Non-Operational Permissive Mode: Not Supported 00:26:39.492 NVM Sets: Not Supported 00:26:39.492 Read Recovery Levels: Not Supported 00:26:39.492 Endurance Groups: Not Supported 00:26:39.492 
Predictable Latency Mode: Not Supported 00:26:39.492 Traffic Based Keep ALive: Not Supported 00:26:39.492 Namespace Granularity: Not Supported 00:26:39.492 SQ Associations: Not Supported 00:26:39.492 UUID List: Not Supported 00:26:39.492 Multi-Domain Subsystem: Not Supported 00:26:39.492 Fixed Capacity Management: Not Supported 00:26:39.492 Variable Capacity Management: Not Supported 00:26:39.492 Delete Endurance Group: Not Supported 00:26:39.492 Delete NVM Set: Not Supported 00:26:39.492 Extended LBA Formats Supported: Not Supported 00:26:39.492 Flexible Data Placement Supported: Not Supported 00:26:39.492 00:26:39.492 Controller Memory Buffer Support 00:26:39.492 ================================ 00:26:39.492 Supported: No 00:26:39.492 00:26:39.492 Persistent Memory Region Support 00:26:39.492 ================================ 00:26:39.492 Supported: No 00:26:39.492 00:26:39.492 Admin Command Set Attributes 00:26:39.492 ============================ 00:26:39.492 Security Send/Receive: Not Supported 00:26:39.492 Format NVM: Not Supported 00:26:39.492 Firmware Activate/Download: Not Supported 00:26:39.492 Namespace Management: Not Supported 00:26:39.492 Device Self-Test: Not Supported 00:26:39.492 Directives: Not Supported 00:26:39.492 NVMe-MI: Not Supported 00:26:39.492 Virtualization Management: Not Supported 00:26:39.492 Doorbell Buffer Config: Not Supported 00:26:39.492 Get LBA Status Capability: Not Supported 00:26:39.492 Command & Feature Lockdown Capability: Not Supported 00:26:39.492 Abort Command Limit: 1 00:26:39.492 Async Event Request Limit: 4 00:26:39.492 Number of Firmware Slots: N/A 00:26:39.492 Firmware Slot 1 Read-Only: N/A 00:26:39.492 Firmware Activation Without Reset: N/A 00:26:39.492 Multiple Update Detection Support: N/A 00:26:39.492 Firmware Update Granularity: No Information Provided 00:26:39.492 Per-Namespace SMART Log: No 00:26:39.492 Asymmetric Namespace Access Log Page: Not Supported 00:26:39.492 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:26:39.492 Command Effects Log Page: Not Supported 00:26:39.492 Get Log Page Extended Data: Supported 00:26:39.492 Telemetry Log Pages: Not Supported 00:26:39.492 Persistent Event Log Pages: Not Supported 00:26:39.492 Supported Log Pages Log Page: May Support 00:26:39.492 Commands Supported & Effects Log Page: Not Supported 00:26:39.492 Feature Identifiers & Effects Log Page:May Support 00:26:39.492 NVMe-MI Commands & Effects Log Page: May Support 00:26:39.492 Data Area 4 for Telemetry Log: Not Supported 00:26:39.492 Error Log Page Entries Supported: 128 00:26:39.492 Keep Alive: Not Supported 00:26:39.492 00:26:39.492 NVM Command Set Attributes 00:26:39.492 ========================== 00:26:39.492 Submission Queue Entry Size 00:26:39.492 Max: 1 00:26:39.492 Min: 1 00:26:39.492 Completion Queue Entry Size 00:26:39.492 Max: 1 00:26:39.492 Min: 1 00:26:39.492 Number of Namespaces: 0 00:26:39.492 Compare Command: Not Supported 00:26:39.492 Write Uncorrectable Command: Not Supported 00:26:39.492 Dataset Management Command: Not Supported 00:26:39.492 Write Zeroes Command: Not Supported 00:26:39.492 Set Features Save Field: Not Supported 00:26:39.492 Reservations: Not Supported 00:26:39.492 Timestamp: Not Supported 00:26:39.492 Copy: Not Supported 00:26:39.492 Volatile Write Cache: Not Present 00:26:39.492 Atomic Write Unit (Normal): 1 00:26:39.492 Atomic Write Unit (PFail): 1 00:26:39.492 Atomic Compare & Write Unit: 1 00:26:39.492 Fused Compare & Write: Supported 00:26:39.492 Scatter-Gather List 00:26:39.492 SGL Command Set: Supported 00:26:39.492 SGL Keyed: Supported 00:26:39.492 SGL Bit Bucket Descriptor: Not Supported 00:26:39.492 SGL Metadata Pointer: Not Supported 00:26:39.492 Oversized SGL: Not Supported 00:26:39.492 SGL Metadata Address: Not Supported 00:26:39.492 SGL Offset: Supported 00:26:39.492 Transport SGL Data Block: Not Supported 00:26:39.492 Replay Protected Memory Block: Not Supported 00:26:39.492 00:26:39.492 
Firmware Slot Information 00:26:39.492 ========================= 00:26:39.492 Active slot: 0 00:26:39.492 00:26:39.492 00:26:39.492 Error Log 00:26:39.492 ========= 00:26:39.492 00:26:39.492 Active Namespaces 00:26:39.492 ================= 00:26:39.493 Discovery Log Page 00:26:39.493 ================== 00:26:39.493 Generation Counter: 2 00:26:39.493 Number of Records: 2 00:26:39.493 Record Format: 0 00:26:39.493 00:26:39.493 Discovery Log Entry 0 00:26:39.493 ---------------------- 00:26:39.493 Transport Type: 3 (TCP) 00:26:39.493 Address Family: 1 (IPv4) 00:26:39.493 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:39.493 Entry Flags: 00:26:39.493 Duplicate Returned Information: 1 00:26:39.493 Explicit Persistent Connection Support for Discovery: 1 00:26:39.493 Transport Requirements: 00:26:39.493 Secure Channel: Not Required 00:26:39.493 Port ID: 0 (0x0000) 00:26:39.493 Controller ID: 65535 (0xffff) 00:26:39.493 Admin Max SQ Size: 128 00:26:39.493 Transport Service Identifier: 4420 00:26:39.493 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:39.493 Transport Address: 10.0.0.2 00:26:39.493 Discovery Log Entry 1 00:26:39.493 ---------------------- 00:26:39.493 Transport Type: 3 (TCP) 00:26:39.493 Address Family: 1 (IPv4) 00:26:39.493 Subsystem Type: 2 (NVM Subsystem) 00:26:39.493 Entry Flags: 00:26:39.493 Duplicate Returned Information: 0 00:26:39.493 Explicit Persistent Connection Support for Discovery: 0 00:26:39.493 Transport Requirements: 00:26:39.493 Secure Channel: Not Required 00:26:39.493 Port ID: 0 (0x0000) 00:26:39.493 Controller ID: 65535 (0xffff) 00:26:39.493 Admin Max SQ Size: 128 00:26:39.493 Transport Service Identifier: 4420 00:26:39.493 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:26:39.493 Transport Address: 10.0.0.2 [2024-12-05 13:58:21.816612] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:26:39.493 [2024-12-05 
13:58:21.816623] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135100) on tqpair=0x20d3690 00:26:39.493 [2024-12-05 13:58:21.816629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.493 [2024-12-05 13:58:21.816633] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135280) on tqpair=0x20d3690 00:26:39.493 [2024-12-05 13:58:21.816637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.493 [2024-12-05 13:58:21.816642] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135400) on tqpair=0x20d3690 00:26:39.493 [2024-12-05 13:58:21.816646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.493 [2024-12-05 13:58:21.816650] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135580) on tqpair=0x20d3690 00:26:39.493 [2024-12-05 13:58:21.816654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.493 [2024-12-05 13:58:21.816661] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.493 [2024-12-05 13:58:21.816665] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.493 [2024-12-05 13:58:21.816668] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d3690) 00:26:39.493 [2024-12-05 13:58:21.816675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.493 [2024-12-05 13:58:21.816688] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135580, cid 3, qid 0 00:26:39.493 [2024-12-05 13:58:21.816749] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.493 [2024-12-05 
13:58:21.816755] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.493 [2024-12-05 13:58:21.816758] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.493 [2024-12-05 13:58:21.816761] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135580) on tqpair=0x20d3690 00:26:39.493 [2024-12-05 13:58:21.816767] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.493 [2024-12-05 13:58:21.816770] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.493 [2024-12-05 13:58:21.816773] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d3690) 00:26:39.493 [2024-12-05 13:58:21.816779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.493 [2024-12-05 13:58:21.816791] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135580, cid 3, qid 0 00:26:39.493 [2024-12-05 13:58:21.816862] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.493 [2024-12-05 13:58:21.816867] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.493 [2024-12-05 13:58:21.816870] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.493 [2024-12-05 13:58:21.816873] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135580) on tqpair=0x20d3690 00:26:39.493 [2024-12-05 13:58:21.816878] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:26:39.493 [2024-12-05 13:58:21.816882] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:26:39.493 [2024-12-05 13:58:21.816892] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.493 [2024-12-05 13:58:21.816895] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.493 
[2024-12-05 13:58:21.816899] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d3690) 00:26:39.493 [2024-12-05 13:58:21.816904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.493 [2024-12-05 13:58:21.816914] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135580, cid 3, qid 0 00:26:39.493 [2024-12-05 13:58:21.816978] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.493 [2024-12-05 13:58:21.816984] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.493 [2024-12-05 13:58:21.816987] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.493 [2024-12-05 13:58:21.816990] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135580) on tqpair=0x20d3690 00:26:39.493 [2024-12-05 13:58:21.816998] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.493 [2024-12-05 13:58:21.817002] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.493 [2024-12-05 13:58:21.817005] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d3690) 00:26:39.493 [2024-12-05 13:58:21.817010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.493 [2024-12-05 13:58:21.817020] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135580, cid 3, qid 0 00:26:39.493 [2024-12-05 13:58:21.817080] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.493 [2024-12-05 13:58:21.817085] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.493 [2024-12-05 13:58:21.817088] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.493 [2024-12-05 13:58:21.817091] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135580) on 
tqpair=0x20d3690 00:26:39.493 [2024-12-05 13:58:21.817099] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.493 [2024-12-05 13:58:21.817103] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.493 [2024-12-05 13:58:21.817106] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d3690) 00:26:39.493 [2024-12-05 13:58:21.817112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.493 [2024-12-05 13:58:21.817121] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135580, cid 3, qid 0 00:26:39.493 [2024-12-05 13:58:21.817184] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.493 [2024-12-05 13:58:21.817190] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.493 [2024-12-05 13:58:21.817192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.493 [2024-12-05 13:58:21.817196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135580) on tqpair=0x20d3690 00:26:39.493 [2024-12-05 13:58:21.817205] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.493 [2024-12-05 13:58:21.817209] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.493 [2024-12-05 13:58:21.817212] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d3690) 00:26:39.494 [2024-12-05 13:58:21.817217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.494 [2024-12-05 13:58:21.817227] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135580, cid 3, qid 0 00:26:39.494 [2024-12-05 13:58:21.817284] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.494 [2024-12-05 13:58:21.817290] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:26:39.494 [2024-12-05 13:58:21.817293] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.494 [2024-12-05 13:58:21.817296] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135580) on tqpair=0x20d3690 00:26:39.494 [2024-12-05 13:58:21.817304] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.494 [2024-12-05 13:58:21.817309] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.494 [2024-12-05 13:58:21.817313] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d3690) 00:26:39.494 [2024-12-05 13:58:21.817318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.494 [2024-12-05 13:58:21.817327] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135580, cid 3, qid 0 00:26:39.494 [2024-12-05 13:58:21.817402] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.494 [2024-12-05 13:58:21.817408] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.494 [2024-12-05 13:58:21.817411] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.494 [2024-12-05 13:58:21.817414] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135580) on tqpair=0x20d3690 00:26:39.494 [2024-12-05 13:58:21.817422] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.494 [2024-12-05 13:58:21.817425] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.494 [2024-12-05 13:58:21.817428] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d3690) 00:26:39.494 [2024-12-05 13:58:21.817434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.494 [2024-12-05 13:58:21.817444] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x2135580, cid 3, qid 0 00:26:39.494 [2024-12-05 13:58:21.817500] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.494 [2024-12-05 13:58:21.817505] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.494 [2024-12-05 13:58:21.817508] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.494 [2024-12-05 13:58:21.817511] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135580) on tqpair=0x20d3690 00:26:39.494 [2024-12-05 13:58:21.817519] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.494 [2024-12-05 13:58:21.817523] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.494 [2024-12-05 13:58:21.817526] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d3690) 00:26:39.494 [2024-12-05 13:58:21.817531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.494 [2024-12-05 13:58:21.817540] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135580, cid 3, qid 0 00:26:39.494 [2024-12-05 13:58:21.817597] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.494 [2024-12-05 13:58:21.817603] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.494 [2024-12-05 13:58:21.817606] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.494 [2024-12-05 13:58:21.817609] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135580) on tqpair=0x20d3690 00:26:39.494 [2024-12-05 13:58:21.817617] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.494 [2024-12-05 13:58:21.817621] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.494 [2024-12-05 13:58:21.817624] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d3690) 00:26:39.494 [2024-12-05 13:58:21.817629] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.494 [2024-12-05 13:58:21.817638] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135580, cid 3, qid 0 00:26:39.494 [2024-12-05 13:58:21.817702] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.494 [2024-12-05 13:58:21.817707] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.494 [2024-12-05 13:58:21.817710] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.494 [2024-12-05 13:58:21.817714] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135580) on tqpair=0x20d3690 00:26:39.494 [2024-12-05 13:58:21.817721] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.494 [2024-12-05 13:58:21.817725] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.494 [2024-12-05 13:58:21.817729] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d3690) 00:26:39.494 [2024-12-05 13:58:21.817735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.494 [2024-12-05 13:58:21.817744] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135580, cid 3, qid 0 00:26:39.494 [2024-12-05 13:58:21.817818] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.494 [2024-12-05 13:58:21.817823] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.494 [2024-12-05 13:58:21.817826] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.494 [2024-12-05 13:58:21.817830] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135580) on tqpair=0x20d3690 00:26:39.494 [2024-12-05 13:58:21.817838] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.494 [2024-12-05 13:58:21.817841] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.494 [2024-12-05 13:58:21.817844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d3690) 00:26:39.494 [2024-12-05 13:58:21.817849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.494 [2024-12-05 13:58:21.817859] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135580, cid 3, qid 0 00:26:39.494 [2024-12-05 13:58:21.817916] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.494 [2024-12-05 13:58:21.817921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.494 [2024-12-05 13:58:21.817924] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.494 [2024-12-05 13:58:21.817927] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135580) on tqpair=0x20d3690 00:26:39.494 [2024-12-05 13:58:21.817935] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.494 [2024-12-05 13:58:21.817939] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.494 [2024-12-05 13:58:21.817942] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d3690) 00:26:39.494 [2024-12-05 13:58:21.817947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.494 [2024-12-05 13:58:21.817957] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135580, cid 3, qid 0 00:26:39.494 [2024-12-05 13:58:21.818019] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.494 [2024-12-05 13:58:21.818025] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.494 [2024-12-05 13:58:21.818027] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.494 [2024-12-05 13:58:21.818031] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135580) on tqpair=0x20d3690 00:26:39.494 [2024-12-05 13:58:21.818039] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.494 [2024-12-05 13:58:21.818042] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.494 [2024-12-05 13:58:21.818045] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d3690) 00:26:39.494 [2024-12-05 13:58:21.818051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.494 [2024-12-05 13:58:21.818059] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135580, cid 3, qid 0 00:26:39.494 [2024-12-05 13:58:21.818116] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.494 [2024-12-05 13:58:21.818121] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.494 [2024-12-05 13:58:21.818124] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.494 [2024-12-05 13:58:21.818127] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135580) on tqpair=0x20d3690 00:26:39.494 [2024-12-05 13:58:21.818135] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.494 [2024-12-05 13:58:21.818139] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.494 [2024-12-05 13:58:21.818142] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d3690) 00:26:39.495 [2024-12-05 13:58:21.818150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-12-05 13:58:21.818159] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135580, cid 3, qid 0 00:26:39.495 [2024-12-05 13:58:21.818216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.495 [2024-12-05 
13:58:21.818222] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.495 [2024-12-05 13:58:21.818225] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.495 [2024-12-05 13:58:21.818228] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135580) on tqpair=0x20d3690 00:26:39.495 [2024-12-05 13:58:21.818236] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.495 [2024-12-05 13:58:21.818239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.495 [2024-12-05 13:58:21.818242] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d3690) 00:26:39.495 [2024-12-05 13:58:21.818248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-12-05 13:58:21.818257] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135580, cid 3, qid 0 00:26:39.495 [2024-12-05 13:58:21.818331] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.495 [2024-12-05 13:58:21.818337] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.495 [2024-12-05 13:58:21.818340] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.495 [2024-12-05 13:58:21.818343] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135580) on tqpair=0x20d3690 00:26:39.495 [2024-12-05 13:58:21.818351] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.495 [2024-12-05 13:58:21.818355] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.495 [2024-12-05 13:58:21.818358] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20d3690) 00:26:39.495 [2024-12-05 13:58:21.818363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-12-05 
13:58:21.822380] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2135580, cid 3, qid 0
00:26:39.495 [2024-12-05 13:58:21.822505] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:39.495 [2024-12-05 13:58:21.822511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:39.495 [2024-12-05 13:58:21.822513] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:39.495 [2024-12-05 13:58:21.822517] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2135580) on tqpair=0x20d3690
00:26:39.495 [2024-12-05 13:58:21.822525] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds
00:26:39.495 
00:26:39.495 13:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:26:39.495 [2024-12-05 13:58:21.861294] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization...
00:26:39.495 [2024-12-05 13:58:21.861327] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid748774 ] 00:26:39.495 [2024-12-05 13:58:21.900565] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:26:39.495 [2024-12-05 13:58:21.900608] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:39.495 [2024-12-05 13:58:21.900613] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:39.495 [2024-12-05 13:58:21.900630] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:39.495 [2024-12-05 13:58:21.900638] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:39.495 [2024-12-05 13:58:21.904572] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:26:39.495 [2024-12-05 13:58:21.904601] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1a45690 0 00:26:39.495 [2024-12-05 13:58:21.911376] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:39.495 [2024-12-05 13:58:21.911388] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:39.495 [2024-12-05 13:58:21.911391] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:39.495 [2024-12-05 13:58:21.911394] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:39.495 [2024-12-05 13:58:21.911424] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.495 [2024-12-05 13:58:21.911429] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.495 [2024-12-05 13:58:21.911432] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a45690) 00:26:39.495 [2024-12-05 13:58:21.911442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:39.495 [2024-12-05 13:58:21.911458] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7100, cid 0, qid 0 00:26:39.495 [2024-12-05 13:58:21.918375] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.495 [2024-12-05 13:58:21.918382] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.495 [2024-12-05 13:58:21.918385] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.495 [2024-12-05 13:58:21.918389] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7100) on tqpair=0x1a45690 00:26:39.495 [2024-12-05 13:58:21.918397] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:39.495 [2024-12-05 13:58:21.918403] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:26:39.495 [2024-12-05 13:58:21.918407] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:26:39.495 [2024-12-05 13:58:21.918420] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.495 [2024-12-05 13:58:21.918424] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.495 [2024-12-05 13:58:21.918427] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a45690) 00:26:39.495 [2024-12-05 13:58:21.918434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-12-05 13:58:21.918446] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7100, cid 0, qid 0 00:26:39.495 [2024-12-05 13:58:21.918581] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.495 [2024-12-05 13:58:21.918587] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.495 [2024-12-05 13:58:21.918590] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.495 [2024-12-05 13:58:21.918593] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7100) on tqpair=0x1a45690 00:26:39.495 [2024-12-05 13:58:21.918600] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:26:39.495 [2024-12-05 13:58:21.918607] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:26:39.495 [2024-12-05 13:58:21.918613] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.495 [2024-12-05 13:58:21.918617] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.495 [2024-12-05 13:58:21.918620] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a45690) 00:26:39.495 [2024-12-05 13:58:21.918625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-12-05 13:58:21.918638] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7100, cid 0, qid 0 00:26:39.495 [2024-12-05 13:58:21.918698] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.495 [2024-12-05 13:58:21.918704] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.495 [2024-12-05 13:58:21.918707] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.495 [2024-12-05 13:58:21.918710] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7100) on tqpair=0x1a45690 00:26:39.495 [2024-12-05 13:58:21.918714] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to check en (no timeout) 00:26:39.495 [2024-12-05 13:58:21.918721] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:26:39.495 [2024-12-05 13:58:21.918727] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.495 [2024-12-05 13:58:21.918730] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.495 [2024-12-05 13:58:21.918734] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a45690) 00:26:39.495 [2024-12-05 13:58:21.918739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.495 [2024-12-05 13:58:21.918749] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7100, cid 0, qid 0 00:26:39.495 [2024-12-05 13:58:21.918807] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.495 [2024-12-05 13:58:21.918813] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.496 [2024-12-05 13:58:21.918816] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.496 [2024-12-05 13:58:21.918819] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7100) on tqpair=0x1a45690 00:26:39.496 [2024-12-05 13:58:21.918823] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:39.496 [2024-12-05 13:58:21.918832] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.496 [2024-12-05 13:58:21.918835] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.496 [2024-12-05 13:58:21.918839] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a45690) 00:26:39.496 [2024-12-05 13:58:21.918844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-12-05 13:58:21.918854] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7100, cid 0, qid 0 00:26:39.496 [2024-12-05 13:58:21.918914] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.496 [2024-12-05 13:58:21.918919] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.496 [2024-12-05 13:58:21.918922] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.496 [2024-12-05 13:58:21.918925] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7100) on tqpair=0x1a45690 00:26:39.496 [2024-12-05 13:58:21.918929] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:26:39.496 [2024-12-05 13:58:21.918934] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:26:39.496 [2024-12-05 13:58:21.918940] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:39.496 [2024-12-05 13:58:21.919048] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:26:39.496 [2024-12-05 13:58:21.919052] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:39.496 [2024-12-05 13:58:21.919059] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.496 [2024-12-05 13:58:21.919062] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.496 [2024-12-05 13:58:21.919069] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a45690) 00:26:39.496 [2024-12-05 13:58:21.919074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-12-05 13:58:21.919085] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7100, cid 0, qid 0 00:26:39.496 [2024-12-05 13:58:21.919144] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.496 [2024-12-05 13:58:21.919150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.496 [2024-12-05 13:58:21.919153] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.496 [2024-12-05 13:58:21.919156] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7100) on tqpair=0x1a45690 00:26:39.496 [2024-12-05 13:58:21.919160] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:39.496 [2024-12-05 13:58:21.919168] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.496 [2024-12-05 13:58:21.919171] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.496 [2024-12-05 13:58:21.919174] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a45690) 00:26:39.496 [2024-12-05 13:58:21.919180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-12-05 13:58:21.919189] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7100, cid 0, qid 0 00:26:39.496 [2024-12-05 13:58:21.919263] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.496 [2024-12-05 13:58:21.919269] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.496 [2024-12-05 13:58:21.919271] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.496 [2024-12-05 13:58:21.919275] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7100) on tqpair=0x1a45690 00:26:39.496 [2024-12-05 13:58:21.919279] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:39.496 [2024-12-05 13:58:21.919283] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:26:39.496 [2024-12-05 13:58:21.919289] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:26:39.496 [2024-12-05 13:58:21.919296] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:26:39.496 [2024-12-05 13:58:21.919304] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.496 [2024-12-05 13:58:21.919307] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a45690) 00:26:39.496 [2024-12-05 13:58:21.919312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.496 [2024-12-05 13:58:21.919322] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7100, cid 0, qid 0 00:26:39.496 [2024-12-05 13:58:21.919415] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:39.496 [2024-12-05 13:58:21.919421] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:39.496 [2024-12-05 13:58:21.919424] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:39.496 [2024-12-05 13:58:21.919427] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a45690): datao=0, datal=4096, cccid=0 00:26:39.496 [2024-12-05 13:58:21.919431] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1aa7100) on tqpair(0x1a45690): expected_datao=0, payload_size=4096 00:26:39.496 [2024-12-05 13:58:21.919435] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.496 [2024-12-05 13:58:21.919456] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:39.496 [2024-12-05 13:58:21.919460] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:39.496 [2024-12-05 13:58:21.919497] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.496 [2024-12-05 13:58:21.919511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.496 [2024-12-05 13:58:21.919514] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.496 [2024-12-05 13:58:21.919518] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7100) on tqpair=0x1a45690 00:26:39.496 [2024-12-05 13:58:21.919524] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:26:39.496 [2024-12-05 13:58:21.919528] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:26:39.496 [2024-12-05 13:58:21.919532] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:26:39.496 [2024-12-05 13:58:21.919535] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:26:39.496 [2024-12-05 13:58:21.919540] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:26:39.496 [2024-12-05 13:58:21.919544] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:26:39.496 [2024-12-05 13:58:21.919552] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:26:39.496 [2024-12-05 13:58:21.919557] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.496 [2024-12-05 13:58:21.919561] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.496 [2024-12-05 13:58:21.919564] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a45690) 00:26:39.496 [2024-12-05 13:58:21.919570] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:39.496 [2024-12-05 13:58:21.919580] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7100, cid 0, qid 0 00:26:39.496 [2024-12-05 13:58:21.919644] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.496 [2024-12-05 13:58:21.919649] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.496 [2024-12-05 13:58:21.919652] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.496 [2024-12-05 13:58:21.919656] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7100) on tqpair=0x1a45690 00:26:39.496 [2024-12-05 13:58:21.919661] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.496 [2024-12-05 13:58:21.919664] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.496 [2024-12-05 13:58:21.919667] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a45690) 00:26:39.497 [2024-12-05 13:58:21.919673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.497 [2024-12-05 13:58:21.919678] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.497 [2024-12-05 13:58:21.919681] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.497 [2024-12-05 13:58:21.919684] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1a45690) 00:26:39.497 [2024-12-05 13:58:21.919689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:26:39.497 [2024-12-05 13:58:21.919694] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.497 [2024-12-05 13:58:21.919697] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.497 [2024-12-05 13:58:21.919700] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1a45690) 00:26:39.497 [2024-12-05 13:58:21.919705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.497 [2024-12-05 13:58:21.919710] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.497 [2024-12-05 13:58:21.919713] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.497 [2024-12-05 13:58:21.919716] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a45690) 00:26:39.497 [2024-12-05 13:58:21.919722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.497 [2024-12-05 13:58:21.919727] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:39.497 [2024-12-05 13:58:21.919737] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:39.497 [2024-12-05 13:58:21.919742] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.497 [2024-12-05 13:58:21.919746] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a45690) 00:26:39.497 [2024-12-05 13:58:21.919751] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-12-05 13:58:21.919762] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1aa7100, cid 0, qid 0 00:26:39.497 [2024-12-05 13:58:21.919767] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7280, cid 1, qid 0 00:26:39.497 [2024-12-05 13:58:21.919771] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7400, cid 2, qid 0 00:26:39.497 [2024-12-05 13:58:21.919775] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7580, cid 3, qid 0 00:26:39.497 [2024-12-05 13:58:21.919779] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7700, cid 4, qid 0 00:26:39.497 [2024-12-05 13:58:21.919873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.497 [2024-12-05 13:58:21.919879] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.497 [2024-12-05 13:58:21.919882] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.497 [2024-12-05 13:58:21.919885] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7700) on tqpair=0x1a45690 00:26:39.497 [2024-12-05 13:58:21.919889] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:26:39.497 [2024-12-05 13:58:21.919893] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:26:39.497 [2024-12-05 13:58:21.919902] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:26:39.497 [2024-12-05 13:58:21.919908] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:26:39.497 [2024-12-05 13:58:21.919913] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.497 [2024-12-05 13:58:21.919917] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.497 [2024-12-05 
13:58:21.919920] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a45690) 00:26:39.497 [2024-12-05 13:58:21.919925] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:39.497 [2024-12-05 13:58:21.919935] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7700, cid 4, qid 0 00:26:39.497 [2024-12-05 13:58:21.919996] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.497 [2024-12-05 13:58:21.920002] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.497 [2024-12-05 13:58:21.920005] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.497 [2024-12-05 13:58:21.920008] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7700) on tqpair=0x1a45690 00:26:39.497 [2024-12-05 13:58:21.920058] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:26:39.497 [2024-12-05 13:58:21.920068] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:26:39.497 [2024-12-05 13:58:21.920074] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.497 [2024-12-05 13:58:21.920079] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a45690) 00:26:39.497 [2024-12-05 13:58:21.920085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-12-05 13:58:21.920094] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7700, cid 4, qid 0 00:26:39.497 [2024-12-05 13:58:21.920165] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:39.497 [2024-12-05 13:58:21.920171] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:39.497 [2024-12-05 13:58:21.920174] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:39.497 [2024-12-05 13:58:21.920177] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a45690): datao=0, datal=4096, cccid=4 00:26:39.497 [2024-12-05 13:58:21.920181] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1aa7700) on tqpair(0x1a45690): expected_datao=0, payload_size=4096 00:26:39.497 [2024-12-05 13:58:21.920185] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.497 [2024-12-05 13:58:21.920197] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:39.497 [2024-12-05 13:58:21.920201] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:39.497 [2024-12-05 13:58:21.961505] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.497 [2024-12-05 13:58:21.961515] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.497 [2024-12-05 13:58:21.961518] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.497 [2024-12-05 13:58:21.961521] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7700) on tqpair=0x1a45690 00:26:39.497 [2024-12-05 13:58:21.961534] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:26:39.497 [2024-12-05 13:58:21.961546] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:26:39.497 [2024-12-05 13:58:21.961556] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:26:39.497 [2024-12-05 13:58:21.961563] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.497 [2024-12-05 13:58:21.961567] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=4 on tqpair(0x1a45690) 00:26:39.497 [2024-12-05 13:58:21.961573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.497 [2024-12-05 13:58:21.961585] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7700, cid 4, qid 0 00:26:39.497 [2024-12-05 13:58:21.961688] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:39.497 [2024-12-05 13:58:21.961693] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:39.497 [2024-12-05 13:58:21.961696] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:39.497 [2024-12-05 13:58:21.961699] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a45690): datao=0, datal=4096, cccid=4 00:26:39.497 [2024-12-05 13:58:21.961703] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1aa7700) on tqpair(0x1a45690): expected_datao=0, payload_size=4096 00:26:39.497 [2024-12-05 13:58:21.961707] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.497 [2024-12-05 13:58:21.961717] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:39.497 [2024-12-05 13:58:21.961721] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:39.497 [2024-12-05 13:58:22.003520] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.497 [2024-12-05 13:58:22.003528] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.498 [2024-12-05 13:58:22.003531] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.498 [2024-12-05 13:58:22.003536] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7700) on tqpair=0x1a45690 00:26:39.498 [2024-12-05 13:58:22.003545] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:26:39.498 
[2024-12-05 13:58:22.003556] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:26:39.498 [2024-12-05 13:58:22.003563] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.498 [2024-12-05 13:58:22.003567] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a45690) 00:26:39.498 [2024-12-05 13:58:22.003573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-12-05 13:58:22.003585] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7700, cid 4, qid 0 00:26:39.498 [2024-12-05 13:58:22.003685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:39.498 [2024-12-05 13:58:22.003690] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:39.498 [2024-12-05 13:58:22.003693] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:39.498 [2024-12-05 13:58:22.003697] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a45690): datao=0, datal=4096, cccid=4 00:26:39.498 [2024-12-05 13:58:22.003700] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1aa7700) on tqpair(0x1a45690): expected_datao=0, payload_size=4096 00:26:39.498 [2024-12-05 13:58:22.003704] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.498 [2024-12-05 13:58:22.003715] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:39.498 [2024-12-05 13:58:22.003718] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:39.498 [2024-12-05 13:58:22.048374] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.498 [2024-12-05 13:58:22.048382] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.498 [2024-12-05 13:58:22.048385] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.498 [2024-12-05 13:58:22.048389] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7700) on tqpair=0x1a45690 00:26:39.498 [2024-12-05 13:58:22.048399] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:39.498 [2024-12-05 13:58:22.048408] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:26:39.498 [2024-12-05 13:58:22.048415] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:26:39.498 [2024-12-05 13:58:22.048420] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:26:39.498 [2024-12-05 13:58:22.048425] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:26:39.498 [2024-12-05 13:58:22.048430] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:26:39.498 [2024-12-05 13:58:22.048435] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:26:39.498 [2024-12-05 13:58:22.048439] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:26:39.498 [2024-12-05 13:58:22.048444] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:26:39.498 [2024-12-05 13:58:22.048456] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.498 [2024-12-05 13:58:22.048460] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a45690) 00:26:39.498 [2024-12-05 13:58:22.048466] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-12-05 13:58:22.048472] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.498 [2024-12-05 13:58:22.048476] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.498 [2024-12-05 13:58:22.048481] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a45690) 00:26:39.498 [2024-12-05 13:58:22.048486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.498 [2024-12-05 13:58:22.048500] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7700, cid 4, qid 0 00:26:39.498 [2024-12-05 13:58:22.048505] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7880, cid 5, qid 0 00:26:39.498 [2024-12-05 13:58:22.048581] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.498 [2024-12-05 13:58:22.048587] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.498 [2024-12-05 13:58:22.048590] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.498 [2024-12-05 13:58:22.048593] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7700) on tqpair=0x1a45690 00:26:39.498 [2024-12-05 13:58:22.048599] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.498 [2024-12-05 13:58:22.048604] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.498 [2024-12-05 13:58:22.048607] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.498 [2024-12-05 13:58:22.048610] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7880) on tqpair=0x1a45690 00:26:39.498 [2024-12-05 
13:58:22.048618] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.498 [2024-12-05 13:58:22.048621] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a45690) 00:26:39.498 [2024-12-05 13:58:22.048627] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-12-05 13:58:22.048636] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7880, cid 5, qid 0 00:26:39.498 [2024-12-05 13:58:22.048699] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.498 [2024-12-05 13:58:22.048704] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.498 [2024-12-05 13:58:22.048708] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.498 [2024-12-05 13:58:22.048711] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7880) on tqpair=0x1a45690 00:26:39.498 [2024-12-05 13:58:22.048718] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.498 [2024-12-05 13:58:22.048722] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a45690) 00:26:39.498 [2024-12-05 13:58:22.048727] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-12-05 13:58:22.048736] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7880, cid 5, qid 0 00:26:39.498 [2024-12-05 13:58:22.048796] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.498 [2024-12-05 13:58:22.048801] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.498 [2024-12-05 13:58:22.048804] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.498 [2024-12-05 13:58:22.048808] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1aa7880) on tqpair=0x1a45690 00:26:39.498 [2024-12-05 13:58:22.048815] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.498 [2024-12-05 13:58:22.048819] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a45690) 00:26:39.498 [2024-12-05 13:58:22.048824] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.498 [2024-12-05 13:58:22.048833] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7880, cid 5, qid 0 00:26:39.498 [2024-12-05 13:58:22.048897] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.499 [2024-12-05 13:58:22.048902] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.499 [2024-12-05 13:58:22.048906] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.499 [2024-12-05 13:58:22.048909] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7880) on tqpair=0x1a45690 00:26:39.499 [2024-12-05 13:58:22.048926] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.499 [2024-12-05 13:58:22.048930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a45690) 00:26:39.499 [2024-12-05 13:58:22.048935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.499 [2024-12-05 13:58:22.048941] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.499 [2024-12-05 13:58:22.048945] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a45690) 00:26:39.499 [2024-12-05 13:58:22.048950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:39.499 [2024-12-05 13:58:22.048956] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.499 [2024-12-05 13:58:22.048959] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1a45690) 00:26:39.499 [2024-12-05 13:58:22.048964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.499 [2024-12-05 13:58:22.048970] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.499 [2024-12-05 13:58:22.048974] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1a45690) 00:26:39.499 [2024-12-05 13:58:22.048979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.499 [2024-12-05 13:58:22.048990] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7880, cid 5, qid 0 00:26:39.499 [2024-12-05 13:58:22.048994] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7700, cid 4, qid 0 00:26:39.499 [2024-12-05 13:58:22.048999] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7a00, cid 6, qid 0 00:26:39.499 [2024-12-05 13:58:22.049003] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7b80, cid 7, qid 0 00:26:39.499 [2024-12-05 13:58:22.049133] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:39.499 [2024-12-05 13:58:22.049139] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:39.499 [2024-12-05 13:58:22.049142] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:39.499 [2024-12-05 13:58:22.049145] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a45690): datao=0, datal=8192, cccid=5 00:26:39.499 [2024-12-05 13:58:22.049150] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1aa7880) on tqpair(0x1a45690): expected_datao=0, payload_size=8192 00:26:39.499 [2024-12-05 13:58:22.049153] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.499 [2024-12-05 13:58:22.049182] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:39.499 [2024-12-05 13:58:22.049186] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:39.499 [2024-12-05 13:58:22.049191] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:39.499 [2024-12-05 13:58:22.049195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:39.499 [2024-12-05 13:58:22.049198] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:39.499 [2024-12-05 13:58:22.049201] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a45690): datao=0, datal=512, cccid=4 00:26:39.499 [2024-12-05 13:58:22.049206] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1aa7700) on tqpair(0x1a45690): expected_datao=0, payload_size=512 00:26:39.499 [2024-12-05 13:58:22.049209] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.499 [2024-12-05 13:58:22.049214] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:39.499 [2024-12-05 13:58:22.049218] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:39.499 [2024-12-05 13:58:22.049222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:39.499 [2024-12-05 13:58:22.049227] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:39.499 [2024-12-05 13:58:22.049232] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:39.499 [2024-12-05 13:58:22.049236] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a45690): datao=0, datal=512, cccid=6 00:26:39.499 [2024-12-05 13:58:22.049240] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x1aa7a00) on tqpair(0x1a45690): expected_datao=0, payload_size=512 00:26:39.499 [2024-12-05 13:58:22.049243] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.499 [2024-12-05 13:58:22.049249] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:39.499 [2024-12-05 13:58:22.049252] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:39.499 [2024-12-05 13:58:22.049256] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:39.499 [2024-12-05 13:58:22.049261] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:39.499 [2024-12-05 13:58:22.049264] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:39.499 [2024-12-05 13:58:22.049267] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a45690): datao=0, datal=4096, cccid=7 00:26:39.499 [2024-12-05 13:58:22.049271] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1aa7b80) on tqpair(0x1a45690): expected_datao=0, payload_size=4096 00:26:39.499 [2024-12-05 13:58:22.049275] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.499 [2024-12-05 13:58:22.049281] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:39.499 [2024-12-05 13:58:22.049284] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:39.499 [2024-12-05 13:58:22.049291] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.499 [2024-12-05 13:58:22.049296] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.499 [2024-12-05 13:58:22.049299] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.499 [2024-12-05 13:58:22.049302] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7880) on tqpair=0x1a45690 00:26:39.499 [2024-12-05 13:58:22.049313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.499 [2024-12-05 13:58:22.049318] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:39.499 [2024-12-05 13:58:22.049321] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:39.499 [2024-12-05 13:58:22.049324] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7700) on tqpair=0x1a45690
00:26:39.499 [2024-12-05 13:58:22.049333] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:39.499 [2024-12-05 13:58:22.049338] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:39.499 [2024-12-05 13:58:22.049341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:39.499 [2024-12-05 13:58:22.049345] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7a00) on tqpair=0x1a45690
00:26:39.499 [2024-12-05 13:58:22.049350] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:39.499 [2024-12-05 13:58:22.049355] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:39.499 [2024-12-05 13:58:22.049358] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:39.499 [2024-12-05 13:58:22.049362] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7b80) on tqpair=0x1a45690
00:26:39.499 =====================================================
00:26:39.499 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:39.499 =====================================================
00:26:39.499 Controller Capabilities/Features
00:26:39.499 ================================
00:26:39.499 Vendor ID: 8086
00:26:39.499 Subsystem Vendor ID: 8086
00:26:39.499 Serial Number: SPDK00000000000001
00:26:39.499 Model Number: SPDK bdev Controller
00:26:39.499 Firmware Version: 25.01
00:26:39.499 Recommended Arb Burst: 6
00:26:39.499 IEEE OUI Identifier: e4 d2 5c
00:26:39.499 Multi-path I/O
00:26:39.499 May have multiple subsystem ports: Yes
00:26:39.499 May have multiple controllers: Yes
00:26:39.499 Associated with SR-IOV VF: No
00:26:39.499 Max Data Transfer Size: 131072
00:26:39.499 Max Number of Namespaces: 32
00:26:39.499 Max Number of I/O Queues: 127
00:26:39.499 NVMe Specification Version (VS): 1.3
00:26:39.499 NVMe Specification Version (Identify): 1.3
00:26:39.499 Maximum Queue Entries: 128
00:26:39.499 Contiguous Queues Required: Yes
00:26:39.499 Arbitration Mechanisms Supported
00:26:39.500 Weighted Round Robin: Not Supported
00:26:39.500 Vendor Specific: Not Supported
00:26:39.500 Reset Timeout: 15000 ms
00:26:39.500 Doorbell Stride: 4 bytes
00:26:39.500 NVM Subsystem Reset: Not Supported
00:26:39.500 Command Sets Supported
00:26:39.500 NVM Command Set: Supported
00:26:39.500 Boot Partition: Not Supported
00:26:39.500 Memory Page Size Minimum: 4096 bytes
00:26:39.500 Memory Page Size Maximum: 4096 bytes
00:26:39.500 Persistent Memory Region: Not Supported
00:26:39.500 Optional Asynchronous Events Supported
00:26:39.500 Namespace Attribute Notices: Supported
00:26:39.500 Firmware Activation Notices: Not Supported
00:26:39.500 ANA Change Notices: Not Supported
00:26:39.500 PLE Aggregate Log Change Notices: Not Supported
00:26:39.500 LBA Status Info Alert Notices: Not Supported
00:26:39.500 EGE Aggregate Log Change Notices: Not Supported
00:26:39.500 Normal NVM Subsystem Shutdown event: Not Supported
00:26:39.500 Zone Descriptor Change Notices: Not Supported
00:26:39.500 Discovery Log Change Notices: Not Supported
00:26:39.500 Controller Attributes
00:26:39.500 128-bit Host Identifier: Supported
00:26:39.500 Non-Operational Permissive Mode: Not Supported
00:26:39.500 NVM Sets: Not Supported
00:26:39.500 Read Recovery Levels: Not Supported
00:26:39.500 Endurance Groups: Not Supported
00:26:39.500 Predictable Latency Mode: Not Supported
00:26:39.500 Traffic Based Keep ALive: Not Supported
00:26:39.500 Namespace Granularity: Not Supported
00:26:39.500 SQ Associations: Not Supported
00:26:39.500 UUID List: Not Supported
00:26:39.500 Multi-Domain Subsystem: Not Supported
00:26:39.500 Fixed Capacity Management: Not Supported
00:26:39.500 Variable Capacity Management: Not Supported
00:26:39.500 Delete Endurance Group: Not Supported
00:26:39.500 Delete NVM Set: Not Supported
00:26:39.500 Extended LBA Formats Supported: Not Supported
00:26:39.500 Flexible Data Placement Supported: Not Supported
00:26:39.500 
00:26:39.500 Controller Memory Buffer Support
00:26:39.500 ================================
00:26:39.500 Supported: No
00:26:39.500 
00:26:39.500 Persistent Memory Region Support
00:26:39.500 ================================
00:26:39.500 Supported: No
00:26:39.500 
00:26:39.500 Admin Command Set Attributes
00:26:39.500 ============================
00:26:39.500 Security Send/Receive: Not Supported
00:26:39.500 Format NVM: Not Supported
00:26:39.500 Firmware Activate/Download: Not Supported
00:26:39.500 Namespace Management: Not Supported
00:26:39.500 Device Self-Test: Not Supported
00:26:39.500 Directives: Not Supported
00:26:39.500 NVMe-MI: Not Supported
00:26:39.500 Virtualization Management: Not Supported
00:26:39.500 Doorbell Buffer Config: Not Supported
00:26:39.500 Get LBA Status Capability: Not Supported
00:26:39.500 Command & Feature Lockdown Capability: Not Supported
00:26:39.500 Abort Command Limit: 4
00:26:39.500 Async Event Request Limit: 4
00:26:39.500 Number of Firmware Slots: N/A
00:26:39.500 Firmware Slot 1 Read-Only: N/A
00:26:39.500 Firmware Activation Without Reset: N/A
00:26:39.500 Multiple Update Detection Support: N/A
00:26:39.500 Firmware Update Granularity: No Information Provided
00:26:39.500 Per-Namespace SMART Log: No
00:26:39.500 Asymmetric Namespace Access Log Page: Not Supported
00:26:39.500 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:26:39.500 Command Effects Log Page: Supported
00:26:39.500 Get Log Page Extended Data: Supported
00:26:39.500 Telemetry Log Pages: Not Supported
00:26:39.500 Persistent Event Log Pages: Not Supported
00:26:39.500 Supported Log Pages Log Page: May Support
00:26:39.500 Commands Supported & Effects Log Page: Not Supported
00:26:39.500 Feature Identifiers & Effects Log Page:May Support
00:26:39.500 NVMe-MI Commands & Effects Log Page: May Support
00:26:39.500 Data Area 4 for Telemetry Log: Not Supported
00:26:39.500 Error Log Page Entries Supported: 128
00:26:39.500 Keep Alive: Supported
00:26:39.500 Keep Alive Granularity: 10000 ms
00:26:39.500 
00:26:39.500 NVM Command Set Attributes
00:26:39.500 ==========================
00:26:39.500 Submission Queue Entry Size
00:26:39.500 Max: 64
00:26:39.500 Min: 64
00:26:39.500 Completion Queue Entry Size
00:26:39.500 Max: 16
00:26:39.500 Min: 16
00:26:39.500 Number of Namespaces: 32
00:26:39.500 Compare Command: Supported
00:26:39.500 Write Uncorrectable Command: Not Supported
00:26:39.500 Dataset Management Command: Supported
00:26:39.500 Write Zeroes Command: Supported
00:26:39.500 Set Features Save Field: Not Supported
00:26:39.500 Reservations: Supported
00:26:39.500 Timestamp: Not Supported
00:26:39.500 Copy: Supported
00:26:39.500 Volatile Write Cache: Present
00:26:39.500 Atomic Write Unit (Normal): 1
00:26:39.500 Atomic Write Unit (PFail): 1
00:26:39.500 Atomic Compare & Write Unit: 1
00:26:39.500 Fused Compare & Write: Supported
00:26:39.500 Scatter-Gather List
00:26:39.500 SGL Command Set: Supported
00:26:39.500 SGL Keyed: Supported
00:26:39.500 SGL Bit Bucket Descriptor: Not Supported
00:26:39.500 SGL Metadata Pointer: Not Supported
00:26:39.500 Oversized SGL: Not Supported
00:26:39.500 SGL Metadata Address: Not Supported
00:26:39.500 SGL Offset: Supported
00:26:39.500 Transport SGL Data Block: Not Supported
00:26:39.500 Replay Protected Memory Block: Not Supported
00:26:39.500 
00:26:39.500 Firmware Slot Information
00:26:39.500 =========================
00:26:39.500 Active slot: 1
00:26:39.500 Slot 1 Firmware Revision: 25.01
00:26:39.500 
00:26:39.500 
00:26:39.500 Commands Supported and Effects
00:26:39.500 ==============================
00:26:39.500 Admin Commands
00:26:39.500 --------------
00:26:39.500 Get Log Page (02h): Supported
00:26:39.500 Identify (06h): Supported
00:26:39.500 Abort (08h): Supported
00:26:39.500 Set Features (09h): Supported
00:26:39.500 Get Features (0Ah): Supported
00:26:39.500 Asynchronous Event Request (0Ch): Supported
00:26:39.500 Keep Alive (18h): Supported
00:26:39.500 I/O Commands
00:26:39.500 ------------
00:26:39.500 Flush (00h): Supported LBA-Change
00:26:39.500 Write (01h): Supported LBA-Change
00:26:39.500 Read (02h): Supported
00:26:39.500 Compare (05h): Supported
00:26:39.500 Write Zeroes (08h): Supported LBA-Change
00:26:39.500 Dataset Management (09h): Supported LBA-Change
00:26:39.500 Copy (19h): Supported LBA-Change
00:26:39.501 
00:26:39.501 Error Log
00:26:39.501 =========
00:26:39.501 
00:26:39.501 Arbitration
00:26:39.501 ===========
00:26:39.501 Arbitration Burst: 1
00:26:39.501 
00:26:39.501 Power Management
00:26:39.501 ================
00:26:39.501 Number of Power States: 1
00:26:39.501 Current Power State: Power State #0
00:26:39.501 Power State #0:
00:26:39.501 Max Power: 0.00 W
00:26:39.501 Non-Operational State: Operational
00:26:39.501 Entry Latency: Not Reported
00:26:39.501 Exit Latency: Not Reported
00:26:39.501 Relative Read Throughput: 0
00:26:39.501 Relative Read Latency: 0
00:26:39.501 Relative Write Throughput: 0
00:26:39.501 Relative Write Latency: 0
00:26:39.501 Idle Power: Not Reported
00:26:39.501 Active Power: Not Reported
00:26:39.501 Non-Operational Permissive Mode: Not Supported
00:26:39.501 
00:26:39.501 Health Information
00:26:39.501 ==================
00:26:39.501 Critical Warnings:
00:26:39.501 Available Spare Space: OK
00:26:39.501 Temperature: OK
00:26:39.501 Device Reliability: OK
00:26:39.501 Read Only: No
00:26:39.501 Volatile Memory Backup: OK
00:26:39.501 Current Temperature: 0 Kelvin (-273 Celsius)
00:26:39.501 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:26:39.501 Available Spare: 0%
00:26:39.501 Available Spare Threshold: 0%
00:26:39.501 Life Percentage Used:[2024-12-05 13:58:22.049446] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:39.501 [2024-12-05 13:58:22.049451] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1a45690)
00:26:39.501 [2024-12-05 13:58:22.049457] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.501 [2024-12-05 13:58:22.049469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7b80, cid 7, qid 0
00:26:39.501 [2024-12-05 13:58:22.049543] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:39.501 [2024-12-05 13:58:22.049549] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:39.501 [2024-12-05 13:58:22.049552] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:39.501 [2024-12-05 13:58:22.049555] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7b80) on tqpair=0x1a45690
00:26:39.501 [2024-12-05 13:58:22.049587] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:26:39.501 [2024-12-05 13:58:22.049596] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7100) on tqpair=0x1a45690
00:26:39.501 [2024-12-05 13:58:22.049601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.501 [2024-12-05 13:58:22.049606] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7280) on tqpair=0x1a45690
00:26:39.501 [2024-12-05 13:58:22.049610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:39.501 [2024-12-05 13:58:22.049614] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7400) on tqpair=0x1a45690
00:26:39.501 [2024-12-05 13:58:22.049618] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.501 [2024-12-05 13:58:22.049622] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7580) on tqpair=0x1a45690 00:26:39.501 [2024-12-05 13:58:22.049626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.501 [2024-12-05 13:58:22.049633] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.501 [2024-12-05 13:58:22.049636] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.501 [2024-12-05 13:58:22.049639] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a45690) 00:26:39.501 [2024-12-05 13:58:22.049645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.501 [2024-12-05 13:58:22.049656] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7580, cid 3, qid 0 00:26:39.501 [2024-12-05 13:58:22.049715] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.501 [2024-12-05 13:58:22.049720] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.501 [2024-12-05 13:58:22.049723] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.501 [2024-12-05 13:58:22.049727] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7580) on tqpair=0x1a45690 00:26:39.501 [2024-12-05 13:58:22.049732] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.501 [2024-12-05 13:58:22.049735] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.501 [2024-12-05 13:58:22.049738] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a45690) 00:26:39.501 [2024-12-05 13:58:22.049744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.501 [2024-12-05 13:58:22.049756] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7580, cid 3, qid 0 00:26:39.501 [2024-12-05 13:58:22.049825] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.501 [2024-12-05 13:58:22.049831] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.501 [2024-12-05 13:58:22.049833] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.501 [2024-12-05 13:58:22.049837] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7580) on tqpair=0x1a45690 00:26:39.501 [2024-12-05 13:58:22.049841] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:26:39.501 [2024-12-05 13:58:22.049845] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:26:39.501 [2024-12-05 13:58:22.049852] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.501 [2024-12-05 13:58:22.049856] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.501 [2024-12-05 13:58:22.049859] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a45690) 00:26:39.501 [2024-12-05 13:58:22.049864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.501 [2024-12-05 13:58:22.049873] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7580, cid 3, qid 0 00:26:39.501 [2024-12-05 13:58:22.049934] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.501 [2024-12-05 13:58:22.049940] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.501 [2024-12-05 13:58:22.049943] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.501 [2024-12-05 13:58:22.049947] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7580) on tqpair=0x1a45690 00:26:39.501 [2024-12-05 13:58:22.049955] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.501 [2024-12-05 13:58:22.049959] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.501 [2024-12-05 13:58:22.049962] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a45690) 00:26:39.501 [2024-12-05 13:58:22.049967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.501 [2024-12-05 13:58:22.049976] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7580, cid 3, qid 0 00:26:39.501 [2024-12-05 13:58:22.050034] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.501 [2024-12-05 13:58:22.050040] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.501 [2024-12-05 13:58:22.050043] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.501 [2024-12-05 13:58:22.050046] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7580) on tqpair=0x1a45690 00:26:39.501 [2024-12-05 13:58:22.050054] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.501 [2024-12-05 13:58:22.050058] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.501 [2024-12-05 13:58:22.050061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a45690) 00:26:39.501 [2024-12-05 13:58:22.050066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.501 [2024-12-05 13:58:22.050075] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7580, cid 3, qid 0 00:26:39.501 [2024-12-05 13:58:22.050144] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.501 [2024-12-05 
13:58:22.050150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.502 [2024-12-05 13:58:22.050153] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.502 [2024-12-05 13:58:22.050156] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7580) on tqpair=0x1a45690 00:26:39.502 [2024-12-05 13:58:22.050164] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.502 [2024-12-05 13:58:22.050167] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.502 [2024-12-05 13:58:22.050170] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a45690) 00:26:39.502 [2024-12-05 13:58:22.050176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.502 [2024-12-05 13:58:22.050185] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7580, cid 3, qid 0 00:26:39.502 [2024-12-05 13:58:22.050244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.502 [2024-12-05 13:58:22.050250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.502 [2024-12-05 13:58:22.050253] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.502 [2024-12-05 13:58:22.050256] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7580) on tqpair=0x1a45690 00:26:39.502 [2024-12-05 13:58:22.050264] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.502 [2024-12-05 13:58:22.050267] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.502 [2024-12-05 13:58:22.050271] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a45690) 00:26:39.502 [2024-12-05 13:58:22.050276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.502 [2024-12-05 
13:58:22.050285] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7580, cid 3, qid 0 00:26:39.502 [2024-12-05 13:58:22.050343] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.502 [2024-12-05 13:58:22.050349] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.502 [2024-12-05 13:58:22.050352] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.502 [2024-12-05 13:58:22.050355] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7580) on tqpair=0x1a45690 00:26:39.502 [2024-12-05 13:58:22.050363] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.502 [2024-12-05 13:58:22.050371] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.502 [2024-12-05 13:58:22.050375] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a45690) 00:26:39.502 [2024-12-05 13:58:22.050380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.502 [2024-12-05 13:58:22.050390] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7580, cid 3, qid 0 00:26:39.502 [2024-12-05 13:58:22.050453] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.502 [2024-12-05 13:58:22.050458] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.502 [2024-12-05 13:58:22.050461] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.502 [2024-12-05 13:58:22.050465] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7580) on tqpair=0x1a45690 00:26:39.502 [2024-12-05 13:58:22.050473] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.502 [2024-12-05 13:58:22.050476] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.502 [2024-12-05 13:58:22.050479] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x1a45690) 00:26:39.502 [2024-12-05 13:58:22.050485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.502 [2024-12-05 13:58:22.050494] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7580, cid 3, qid 0 00:26:39.502 [2024-12-05 13:58:22.050551] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.502 [2024-12-05 13:58:22.050556] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.502 [2024-12-05 13:58:22.050559] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.502 [2024-12-05 13:58:22.050563] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7580) on tqpair=0x1a45690 00:26:39.502 [2024-12-05 13:58:22.050571] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.502 [2024-12-05 13:58:22.050574] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.502 [2024-12-05 13:58:22.050577] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a45690) 00:26:39.502 [2024-12-05 13:58:22.050583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.502 [2024-12-05 13:58:22.050592] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7580, cid 3, qid 0 00:26:39.502 [2024-12-05 13:58:22.050649] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.502 [2024-12-05 13:58:22.050655] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.502 [2024-12-05 13:58:22.050658] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.502 [2024-12-05 13:58:22.050661] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7580) on tqpair=0x1a45690 00:26:39.502 [2024-12-05 13:58:22.050669] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.502 [2024-12-05 13:58:22.050673] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.502 [2024-12-05 13:58:22.050676] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a45690) 00:26:39.502 [2024-12-05 13:58:22.050681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.502 [2024-12-05 13:58:22.050690] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7580, cid 3, qid 0 00:26:39.502 [2024-12-05 13:58:22.050751] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.502 [2024-12-05 13:58:22.050757] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.502 [2024-12-05 13:58:22.050761] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.502 [2024-12-05 13:58:22.050765] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7580) on tqpair=0x1a45690 00:26:39.502 [2024-12-05 13:58:22.050773] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.502 [2024-12-05 13:58:22.050776] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.502 [2024-12-05 13:58:22.050779] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a45690) 00:26:39.502 [2024-12-05 13:58:22.050785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.502 [2024-12-05 13:58:22.050794] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7580, cid 3, qid 0 00:26:39.502 [2024-12-05 13:58:22.050852] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.502 [2024-12-05 13:58:22.050858] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.502 [2024-12-05 13:58:22.050861] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.502 [2024-12-05 13:58:22.050864] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7580) on tqpair=0x1a45690 00:26:39.502 [2024-12-05 13:58:22.050872] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.502 [2024-12-05 13:58:22.050876] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.502 [2024-12-05 13:58:22.050879] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a45690) 00:26:39.502 [2024-12-05 13:58:22.050884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.502 [2024-12-05 13:58:22.050893] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7580, cid 3, qid 0 00:26:39.502 [2024-12-05 13:58:22.050953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.502 [2024-12-05 13:58:22.050959] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.502 [2024-12-05 13:58:22.050962] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.502 [2024-12-05 13:58:22.050965] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7580) on tqpair=0x1a45690 00:26:39.502 [2024-12-05 13:58:22.050973] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.502 [2024-12-05 13:58:22.050977] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.502 [2024-12-05 13:58:22.050980] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a45690) 00:26:39.502 [2024-12-05 13:58:22.050985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.502 [2024-12-05 13:58:22.050994] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7580, cid 3, qid 0 00:26:39.502 [2024-12-05 
13:58:22.051053] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.502 [2024-12-05 13:58:22.051059] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.502 [2024-12-05 13:58:22.051062] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.502 [2024-12-05 13:58:22.051065] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7580) on tqpair=0x1a45690 00:26:39.502 [2024-12-05 13:58:22.051073] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.502 [2024-12-05 13:58:22.051077] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.503 [2024-12-05 13:58:22.051080] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a45690) 00:26:39.503 [2024-12-05 13:58:22.051085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.503 [2024-12-05 13:58:22.051094] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7580, cid 3, qid 0 00:26:39.503 [2024-12-05 13:58:22.051155] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.503 [2024-12-05 13:58:22.051161] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.503 [2024-12-05 13:58:22.051164] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.503 [2024-12-05 13:58:22.051169] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7580) on tqpair=0x1a45690 00:26:39.503 [2024-12-05 13:58:22.051177] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.503 [2024-12-05 13:58:22.051180] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.503 [2024-12-05 13:58:22.051183] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a45690) 00:26:39.503 [2024-12-05 13:58:22.051189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.503 [2024-12-05 13:58:22.051197] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7580, cid 3, qid 0 00:26:39.503 [2024-12-05 13:58:22.051266] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.503 [2024-12-05 13:58:22.051272] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.503 [2024-12-05 13:58:22.051275] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.503 [2024-12-05 13:58:22.051278] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7580) on tqpair=0x1a45690 00:26:39.503 [2024-12-05 13:58:22.051286] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.503 [2024-12-05 13:58:22.051289] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.503 [2024-12-05 13:58:22.051292] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a45690) 00:26:39.503 [2024-12-05 13:58:22.051298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.503 [2024-12-05 13:58:22.051307] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7580, cid 3, qid 0 00:26:39.503 [2024-12-05 13:58:22.051363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.503 [2024-12-05 13:58:22.051372] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.503 [2024-12-05 13:58:22.051375] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.503 [2024-12-05 13:58:22.051378] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7580) on tqpair=0x1a45690 00:26:39.503 [2024-12-05 13:58:22.051386] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.503 [2024-12-05 13:58:22.051390] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:26:39.503 [2024-12-05 13:58:22.051393] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a45690) 00:26:39.503 [2024-12-05 13:58:22.051398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.503 [2024-12-05 13:58:22.051407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7580, cid 3, qid 0 00:26:39.503 [2024-12-05 13:58:22.051466] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.503 [2024-12-05 13:58:22.051472] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.503 [2024-12-05 13:58:22.051475] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.503 [2024-12-05 13:58:22.051478] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7580) on tqpair=0x1a45690 00:26:39.503 [2024-12-05 13:58:22.051487] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.503 [2024-12-05 13:58:22.051491] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.503 [2024-12-05 13:58:22.051494] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a45690) 00:26:39.503 [2024-12-05 13:58:22.051499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.503 [2024-12-05 13:58:22.051509] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7580, cid 3, qid 0 00:26:39.503 [2024-12-05 13:58:22.051575] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.503 [2024-12-05 13:58:22.051580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.503 [2024-12-05 13:58:22.051583] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.503 [2024-12-05 13:58:22.051586] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7580) 
on tqpair=0x1a45690 00:26:39.503 [2024-12-05 13:58:22.051597] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.503 [2024-12-05 13:58:22.051600] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.503 [2024-12-05 13:58:22.051603] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a45690) 00:26:39.503 [2024-12-05 13:58:22.051609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.503 [2024-12-05 13:58:22.051618] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7580, cid 3, qid 0 00:26:39.503 [2024-12-05 13:58:22.051675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.503 [2024-12-05 13:58:22.051681] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.503 [2024-12-05 13:58:22.051684] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.503 [2024-12-05 13:58:22.051687] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7580) on tqpair=0x1a45690 00:26:39.503 [2024-12-05 13:58:22.051695] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.503 [2024-12-05 13:58:22.051698] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.503 [2024-12-05 13:58:22.051701] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a45690) 00:26:39.503 [2024-12-05 13:58:22.051707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.503 [2024-12-05 13:58:22.051716] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7580, cid 3, qid 0 00:26:39.503 [2024-12-05 13:58:22.051775] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.503 [2024-12-05 13:58:22.051780] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:26:39.503 [2024-12-05 13:58:22.051783] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.503 [2024-12-05 13:58:22.051787] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7580) on tqpair=0x1a45690 00:26:39.503 [2024-12-05 13:58:22.051794] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.503 [2024-12-05 13:58:22.051798] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.503 [2024-12-05 13:58:22.051801] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a45690) 00:26:39.503 [2024-12-05 13:58:22.051806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.503 [2024-12-05 13:58:22.051815] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7580, cid 3, qid 0 00:26:39.503 [2024-12-05 13:58:22.051877] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.503 [2024-12-05 13:58:22.051882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.503 [2024-12-05 13:58:22.051885] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.503 [2024-12-05 13:58:22.051889] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7580) on tqpair=0x1a45690 00:26:39.503 [2024-12-05 13:58:22.051896] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.503 [2024-12-05 13:58:22.051900] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.503 [2024-12-05 13:58:22.051903] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a45690) 00:26:39.503 [2024-12-05 13:58:22.051909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.503 [2024-12-05 13:58:22.051917] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1aa7580, cid 3, qid 0 00:26:39.503 [2024-12-05 13:58:22.051983] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.503 [2024-12-05 13:58:22.051989] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.503 [2024-12-05 13:58:22.051991] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.503 [2024-12-05 13:58:22.051995] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7580) on tqpair=0x1a45690 00:26:39.503 [2024-12-05 13:58:22.052003] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.503 [2024-12-05 13:58:22.052008] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.503 [2024-12-05 13:58:22.052011] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a45690) 00:26:39.503 [2024-12-05 13:58:22.052017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.503 [2024-12-05 13:58:22.052027] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7580, cid 3, qid 0 00:26:39.504 [2024-12-05 13:58:22.052092] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.504 [2024-12-05 13:58:22.052098] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.504 [2024-12-05 13:58:22.052101] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.504 [2024-12-05 13:58:22.052104] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7580) on tqpair=0x1a45690 00:26:39.504 [2024-12-05 13:58:22.052112] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.504 [2024-12-05 13:58:22.052116] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.504 [2024-12-05 13:58:22.052119] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a45690) 00:26:39.504 [2024-12-05 13:58:22.052124] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.504 [2024-12-05 13:58:22.052134] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7580, cid 3, qid 0 00:26:39.504 [2024-12-05 13:58:22.052196] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.504 [2024-12-05 13:58:22.052201] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.504 [2024-12-05 13:58:22.052204] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.504 [2024-12-05 13:58:22.052208] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7580) on tqpair=0x1a45690 00:26:39.504 [2024-12-05 13:58:22.052216] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.504 [2024-12-05 13:58:22.052219] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.504 [2024-12-05 13:58:22.052222] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a45690) 00:26:39.504 [2024-12-05 13:58:22.052228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.504 [2024-12-05 13:58:22.052236] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7580, cid 3, qid 0 00:26:39.504 [2024-12-05 13:58:22.052301] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.504 [2024-12-05 13:58:22.052306] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.504 [2024-12-05 13:58:22.052309] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.504 [2024-12-05 13:58:22.052313] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7580) on tqpair=0x1a45690 00:26:39.504 [2024-12-05 13:58:22.052321] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.504 [2024-12-05 13:58:22.052324] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.504 [2024-12-05 13:58:22.052327] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a45690) 00:26:39.504 [2024-12-05 13:58:22.052333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.504 [2024-12-05 13:58:22.052342] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7580, cid 3, qid 0 00:26:39.504 [2024-12-05 13:58:22.056376] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.504 [2024-12-05 13:58:22.056383] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.504 [2024-12-05 13:58:22.056386] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.504 [2024-12-05 13:58:22.056389] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7580) on tqpair=0x1a45690 00:26:39.504 [2024-12-05 13:58:22.056399] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:39.504 [2024-12-05 13:58:22.056403] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:39.504 [2024-12-05 13:58:22.056408] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a45690) 00:26:39.504 [2024-12-05 13:58:22.056414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.504 [2024-12-05 13:58:22.056425] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1aa7580, cid 3, qid 0 00:26:39.504 [2024-12-05 13:58:22.056551] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:39.504 [2024-12-05 13:58:22.056556] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:39.504 [2024-12-05 13:58:22.056559] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:39.504 [2024-12-05 13:58:22.056563] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1aa7580) on tqpair=0x1a45690 00:26:39.504 [2024-12-05 13:58:22.056570] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:26:39.763 0% 00:26:39.763 Data Units Read: 0 00:26:39.763 Data Units Written: 0 00:26:39.763 Host Read Commands: 0 00:26:39.763 Host Write Commands: 0 00:26:39.763 Controller Busy Time: 0 minutes 00:26:39.763 Power Cycles: 0 00:26:39.763 Power On Hours: 0 hours 00:26:39.763 Unsafe Shutdowns: 0 00:26:39.763 Unrecoverable Media Errors: 0 00:26:39.763 Lifetime Error Log Entries: 0 00:26:39.763 Warning Temperature Time: 0 minutes 00:26:39.763 Critical Temperature Time: 0 minutes 00:26:39.763 00:26:39.763 Number of Queues 00:26:39.763 ================ 00:26:39.763 Number of I/O Submission Queues: 127 00:26:39.763 Number of I/O Completion Queues: 127 00:26:39.763 00:26:39.763 Active Namespaces 00:26:39.763 ================= 00:26:39.763 Namespace ID:1 00:26:39.763 Error Recovery Timeout: Unlimited 00:26:39.763 Command Set Identifier: NVM (00h) 00:26:39.763 Deallocate: Supported 00:26:39.763 Deallocated/Unwritten Error: Not Supported 00:26:39.763 Deallocated Read Value: Unknown 00:26:39.763 Deallocate in Write Zeroes: Not Supported 00:26:39.763 Deallocated Guard Field: 0xFFFF 00:26:39.763 Flush: Supported 00:26:39.763 Reservation: Supported 00:26:39.763 Namespace Sharing Capabilities: Multiple Controllers 00:26:39.763 Size (in LBAs): 131072 (0GiB) 00:26:39.763 Capacity (in LBAs): 131072 (0GiB) 00:26:39.763 Utilization (in LBAs): 131072 (0GiB) 00:26:39.763 NGUID: ABCDEF0123456789ABCDEF0123456789 00:26:39.763 EUI64: ABCDEF0123456789 00:26:39.764 UUID: d848513d-0b10-47f7-8fd5-7a85da7b5d42 00:26:39.764 Thin Provisioning: Not Supported 00:26:39.764 Per-NS Atomic Units: Yes 00:26:39.764 Atomic Boundary Size (Normal): 0 00:26:39.764 Atomic Boundary Size (PFail): 0 00:26:39.764 Atomic Boundary Offset: 0 00:26:39.764 
Maximum Single Source Range Length: 65535 00:26:39.764 Maximum Copy Length: 65535 00:26:39.764 Maximum Source Range Count: 1 00:26:39.764 NGUID/EUI64 Never Reused: No 00:26:39.764 Namespace Write Protected: No 00:26:39.764 Number of LBA Formats: 1 00:26:39.764 Current LBA Format: LBA Format #00 00:26:39.764 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:39.764 00:26:39.764 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:26:39.764 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:39.764 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.764 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:39.764 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.764 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:26:39.764 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:26:39.764 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:39.764 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:26:39.764 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:39.764 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:26:39.764 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:39.764 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:39.764 rmmod nvme_tcp 00:26:39.764 rmmod nvme_fabrics 00:26:39.764 rmmod nvme_keyring 00:26:39.764 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:39.764 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:26:39.764 
13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:26:39.764 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 748670 ']' 00:26:39.764 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 748670 00:26:39.764 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 748670 ']' 00:26:39.764 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 748670 00:26:39.764 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:26:39.764 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:39.764 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 748670 00:26:39.764 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:39.764 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:39.764 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 748670' 00:26:39.764 killing process with pid 748670 00:26:39.764 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 748670 00:26:39.764 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 748670 00:26:40.023 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:40.023 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:40.023 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:40.023 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:26:40.023 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:26:40.023 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:40.023 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:26:40.023 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:40.023 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:40.023 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.023 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:40.023 13:58:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.926 13:58:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:41.926 00:26:41.926 real 0m9.455s 00:26:41.926 user 0m5.896s 00:26:41.926 sys 0m4.861s 00:26:41.926 13:58:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:41.926 13:58:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:41.926 ************************************ 00:26:41.926 END TEST nvmf_identify 00:26:41.926 ************************************ 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.184 ************************************ 00:26:42.184 START TEST nvmf_perf 00:26:42.184 ************************************ 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:42.184 * Looking for test storage... 00:26:42.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:42.184 13:58:24 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:42.184 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:42.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.184 --rc genhtml_branch_coverage=1 00:26:42.184 --rc genhtml_function_coverage=1 00:26:42.184 --rc genhtml_legend=1 00:26:42.184 --rc geninfo_all_blocks=1 00:26:42.184 --rc geninfo_unexecuted_blocks=1 00:26:42.184 00:26:42.184 ' 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # 
LCOV_OPTS=' 00:26:42.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.185 --rc genhtml_branch_coverage=1 00:26:42.185 --rc genhtml_function_coverage=1 00:26:42.185 --rc genhtml_legend=1 00:26:42.185 --rc geninfo_all_blocks=1 00:26:42.185 --rc geninfo_unexecuted_blocks=1 00:26:42.185 00:26:42.185 ' 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:42.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.185 --rc genhtml_branch_coverage=1 00:26:42.185 --rc genhtml_function_coverage=1 00:26:42.185 --rc genhtml_legend=1 00:26:42.185 --rc geninfo_all_blocks=1 00:26:42.185 --rc geninfo_unexecuted_blocks=1 00:26:42.185 00:26:42.185 ' 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:42.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.185 --rc genhtml_branch_coverage=1 00:26:42.185 --rc genhtml_function_coverage=1 00:26:42.185 --rc genhtml_legend=1 00:26:42.185 --rc geninfo_all_blocks=1 00:26:42.185 --rc geninfo_unexecuted_blocks=1 00:26:42.185 00:26:42.185 ' 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:42.185 13:58:24 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:42.185 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:42.185 13:58:24 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:42.185 13:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:48.749 13:58:30 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:48.749 
13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:48.749 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:48.749 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:48.749 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:48.750 Found net devices under 0000:86:00.0: cvl_0_0 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:48.750 13:58:30 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:48.750 Found net devices under 0000:86:00.1: cvl_0_1 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:48.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:48.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms 00:26:48.750 00:26:48.750 --- 10.0.0.2 ping statistics --- 00:26:48.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.750 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:48.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:48.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:26:48.750 00:26:48.750 --- 10.0.0.1 ping statistics --- 00:26:48.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.750 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=752300 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 752300 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 752300 ']' 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:48.750 13:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:48.750 [2024-12-05 13:58:30.757709] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:26:48.750 [2024-12-05 13:58:30.757762] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:48.750 [2024-12-05 13:58:30.838201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:48.750 [2024-12-05 13:58:30.882107] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:48.750 [2024-12-05 13:58:30.882144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:48.750 [2024-12-05 13:58:30.882151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:48.750 [2024-12-05 13:58:30.882160] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:48.750 [2024-12-05 13:58:30.882165] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:48.750 [2024-12-05 13:58:30.883598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.750 [2024-12-05 13:58:30.883705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:48.750 [2024-12-05 13:58:30.883810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.750 [2024-12-05 13:58:30.883811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:49.010 13:58:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:49.010 13:58:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:26:49.269 13:58:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:49.269 13:58:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:49.269 13:58:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:49.269 13:58:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:49.269 13:58:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:49.269 13:58:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:26:52.556 13:58:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:26:52.557 13:58:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:52.557 13:58:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:26:52.557 13:58:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:52.557 13:58:35 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:52.557 13:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:26:52.557 13:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:26:52.557 13:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:26:52.557 13:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:52.815 [2024-12-05 13:58:35.256261] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:52.815 13:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:53.073 13:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:53.073 13:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:53.330 13:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:53.331 13:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:53.588 13:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:53.588 [2024-12-05 13:58:36.079292] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:53.588 13:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:26:53.846 13:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:26:53.846 13:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:26:53.846 13:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:26:53.847 13:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:26:55.222 Initializing NVMe Controllers 00:26:55.222 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:26:55.222 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:26:55.222 Initialization complete. Launching workers. 00:26:55.222 ======================================================== 00:26:55.222 Latency(us) 00:26:55.222 Device Information : IOPS MiB/s Average min max 00:26:55.222 PCIE (0000:5e:00.0) NSID 1 from core 0: 98175.68 383.50 325.36 39.55 4778.28 00:26:55.222 ======================================================== 00:26:55.222 Total : 98175.68 383.50 325.36 39.55 4778.28 00:26:55.222 00:26:55.222 13:58:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:56.637 Initializing NVMe Controllers 00:26:56.637 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:56.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:56.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:56.637 Initialization complete. Launching workers. 
00:26:56.637 ======================================================== 00:26:56.637 Latency(us) 00:26:56.637 Device Information : IOPS MiB/s Average min max 00:26:56.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 81.00 0.32 12640.26 104.95 45687.67 00:26:56.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 52.00 0.20 19513.63 7203.70 50876.72 00:26:56.637 ======================================================== 00:26:56.638 Total : 133.00 0.52 15327.60 104.95 50876.72 00:26:56.638 00:26:56.638 13:58:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:57.572 Initializing NVMe Controllers 00:26:57.572 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:57.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:57.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:57.572 Initialization complete. Launching workers. 
00:26:57.572 ======================================================== 00:26:57.572 Latency(us) 00:26:57.572 Device Information : IOPS MiB/s Average min max 00:26:57.572 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11305.00 44.16 2834.20 491.16 6722.80 00:26:57.572 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3863.00 15.09 8318.71 4474.77 15992.91 00:26:57.572 ======================================================== 00:26:57.572 Total : 15168.00 59.25 4231.00 491.16 15992.91 00:26:57.572 00:26:57.572 13:58:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:26:57.572 13:58:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:26:57.572 13:58:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:00.102 Initializing NVMe Controllers 00:27:00.102 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:00.102 Controller IO queue size 128, less than required. 00:27:00.102 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:00.102 Controller IO queue size 128, less than required. 00:27:00.102 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:00.102 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:00.102 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:00.102 Initialization complete. Launching workers. 
00:27:00.102 ======================================================== 00:27:00.102 Latency(us) 00:27:00.102 Device Information : IOPS MiB/s Average min max 00:27:00.102 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1834.85 458.71 71114.34 48266.67 111148.62 00:27:00.102 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 590.45 147.61 220147.12 86811.30 325441.62 00:27:00.102 ======================================================== 00:27:00.102 Total : 2425.30 606.33 107397.11 48266.67 325441.62 00:27:00.102 00:27:00.102 13:58:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:27:00.361 No valid NVMe controllers or AIO or URING devices found 00:27:00.361 Initializing NVMe Controllers 00:27:00.361 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:00.361 Controller IO queue size 128, less than required. 00:27:00.361 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:00.361 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:27:00.361 Controller IO queue size 128, less than required. 00:27:00.361 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:00.361 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:27:00.361 WARNING: Some requested NVMe devices were skipped 00:27:00.361 13:58:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:27:02.894 Initializing NVMe Controllers 00:27:02.894 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:02.894 Controller IO queue size 128, less than required. 00:27:02.894 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:02.894 Controller IO queue size 128, less than required. 00:27:02.894 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:02.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:02.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:02.894 Initialization complete. Launching workers. 
00:27:02.894 00:27:02.894 ==================== 00:27:02.894 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:27:02.895 TCP transport: 00:27:02.895 polls: 13088 00:27:02.895 idle_polls: 9307 00:27:02.895 sock_completions: 3781 00:27:02.895 nvme_completions: 6209 00:27:02.895 submitted_requests: 9250 00:27:02.895 queued_requests: 1 00:27:02.895 00:27:02.895 ==================== 00:27:02.895 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:27:02.895 TCP transport: 00:27:02.895 polls: 13004 00:27:02.895 idle_polls: 8702 00:27:02.895 sock_completions: 4302 00:27:02.895 nvme_completions: 6619 00:27:02.895 submitted_requests: 9906 00:27:02.895 queued_requests: 1 00:27:02.895 ======================================================== 00:27:02.895 Latency(us) 00:27:02.895 Device Information : IOPS MiB/s Average min max 00:27:02.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1550.96 387.74 85154.06 65846.94 135969.50 00:27:02.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1653.39 413.35 78013.97 45928.32 120024.47 00:27:02.895 ======================================================== 00:27:02.895 Total : 3204.35 801.09 81469.89 45928.32 135969.50 00:27:02.895 00:27:03.153 13:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:27:03.153 13:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:03.153 13:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:27:03.153 13:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:27:03.153 13:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:27:03.153 13:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:03.153 13:58:45 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:27:03.153 13:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:03.153 13:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:27:03.153 13:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:03.153 13:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:03.153 rmmod nvme_tcp 00:27:03.411 rmmod nvme_fabrics 00:27:03.411 rmmod nvme_keyring 00:27:03.411 13:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:03.411 13:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:27:03.411 13:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:27:03.411 13:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 752300 ']' 00:27:03.411 13:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 752300 00:27:03.411 13:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 752300 ']' 00:27:03.411 13:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 752300 00:27:03.411 13:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:27:03.411 13:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:03.411 13:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 752300 00:27:03.411 13:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:03.411 13:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:03.411 13:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 752300' 00:27:03.411 killing process with pid 752300 00:27:03.411 13:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # 
kill 752300 00:27:03.411 13:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 752300 00:27:05.943 13:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:05.943 13:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:05.943 13:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:05.943 13:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:27:05.943 13:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:27:05.943 13:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:05.943 13:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:27:05.943 13:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:05.943 13:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:05.943 13:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.943 13:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:05.943 13:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.851 13:58:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:07.851 00:27:07.851 real 0m25.462s 00:27:07.851 user 1m7.891s 00:27:07.851 sys 0m8.389s 00:27:07.851 13:58:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:07.851 13:58:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:07.851 ************************************ 00:27:07.851 END TEST nvmf_perf 00:27:07.851 ************************************ 00:27:07.851 13:58:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:07.851 13:58:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:07.851 13:58:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:07.851 13:58:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.851 ************************************ 00:27:07.851 START TEST nvmf_fio_host 00:27:07.851 ************************************ 00:27:07.851 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:07.851 * Looking for test storage... 00:27:07.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:07.851 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:07.851 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:27:07.851 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:07.851 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:07.851 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:07.851 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:07.852 13:58:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:07.852 13:58:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:07.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.852 --rc genhtml_branch_coverage=1 00:27:07.852 --rc genhtml_function_coverage=1 00:27:07.852 --rc genhtml_legend=1 00:27:07.852 --rc geninfo_all_blocks=1 00:27:07.852 --rc geninfo_unexecuted_blocks=1 00:27:07.852 00:27:07.852 ' 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:07.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.852 --rc genhtml_branch_coverage=1 00:27:07.852 --rc genhtml_function_coverage=1 00:27:07.852 --rc genhtml_legend=1 00:27:07.852 --rc geninfo_all_blocks=1 00:27:07.852 --rc geninfo_unexecuted_blocks=1 00:27:07.852 00:27:07.852 ' 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:07.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.852 --rc genhtml_branch_coverage=1 00:27:07.852 --rc genhtml_function_coverage=1 00:27:07.852 --rc genhtml_legend=1 00:27:07.852 --rc geninfo_all_blocks=1 00:27:07.852 --rc geninfo_unexecuted_blocks=1 00:27:07.852 00:27:07.852 ' 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:07.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.852 --rc genhtml_branch_coverage=1 00:27:07.852 --rc genhtml_function_coverage=1 00:27:07.852 --rc genhtml_legend=1 00:27:07.852 --rc geninfo_all_blocks=1 00:27:07.852 --rc geninfo_unexecuted_blocks=1 00:27:07.852 00:27:07.852 ' 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:07.852 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:07.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:07.853 13:58:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:07.853 13:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.421 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:14.421 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:14.421 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:14.421 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:14.421 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:14.421 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:14.421 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:14.421 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:14.421 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:14.421 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:27:14.421 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:14.421 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:27:14.421 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:14.421 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:27:14.421 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:14.421 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:14.421 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:14.421 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:14.421 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:14.421 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:14.421 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:14.421 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:27:14.422 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:14.422 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.422 13:58:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:14.422 Found net devices under 0000:86:00.0: cvl_0_0 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:14.422 Found net devices under 0000:86:00.1: cvl_0_1 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:14.422 13:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:14.422 13:58:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:14.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:14.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:27:14.422 00:27:14.422 --- 10.0.0.2 ping statistics --- 00:27:14.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.422 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:14.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:14.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:27:14.422 00:27:14.422 --- 10.0.0.1 ping statistics --- 00:27:14.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.422 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=758626 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 758626 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 758626 ']' 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:14.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:14.422 13:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.422 [2024-12-05 13:58:56.289792] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:27:14.422 [2024-12-05 13:58:56.289841] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:14.422 [2024-12-05 13:58:56.369923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:14.423 [2024-12-05 13:58:56.411758] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:14.423 [2024-12-05 13:58:56.411797] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:14.423 [2024-12-05 13:58:56.411804] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:14.423 [2024-12-05 13:58:56.411810] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:14.423 [2024-12-05 13:58:56.411815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:14.423 [2024-12-05 13:58:56.413382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:14.423 [2024-12-05 13:58:56.413472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:14.423 [2024-12-05 13:58:56.413585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.423 [2024-12-05 13:58:56.413586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:14.681 13:58:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:14.681 13:58:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:27:14.681 13:58:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:14.940 [2024-12-05 13:58:57.310629] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:14.940 13:58:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:27:14.940 13:58:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:14.940 13:58:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.940 13:58:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:15.199 Malloc1 00:27:15.199 13:58:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:15.199 13:58:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:15.458 13:58:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:15.717 [2024-12-05 13:58:58.139087] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:15.717 13:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:16.022 13:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:16.022 13:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:16.022 13:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:16.022 13:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:16.022 13:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:16.022 13:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:16.022 13:58:58 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:16.022 13:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:27:16.022 13:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:16.022 13:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:16.022 13:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:16.022 13:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:27:16.022 13:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:16.022 13:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:16.022 13:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:16.022 13:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:16.022 13:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:16.022 13:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:16.022 13:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:16.022 13:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:16.022 13:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:16.022 13:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:16.022 13:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:16.396 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:16.396 fio-3.35 00:27:16.396 Starting 1 thread 00:27:18.929 00:27:18.929 test: (groupid=0, jobs=1): err= 0: pid=759227: Thu Dec 5 13:59:00 2024 00:27:18.929 read: IOPS=11.9k, BW=46.5MiB/s (48.7MB/s)(93.1MiB/2005msec) 00:27:18.929 slat (nsec): min=1532, max=253350, avg=1741.75, stdev=2255.21 00:27:18.929 clat (usec): min=3283, max=9762, avg=5926.32, stdev=473.44 00:27:18.929 lat (usec): min=3317, max=9763, avg=5928.06, stdev=473.39 00:27:18.929 clat percentiles (usec): 00:27:18.929 | 1.00th=[ 4817], 5.00th=[ 5145], 10.00th=[ 5342], 20.00th=[ 5538], 00:27:18.929 | 30.00th=[ 5669], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 6063], 00:27:18.929 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 6652], 00:27:18.929 | 99.00th=[ 6980], 99.50th=[ 7177], 99.90th=[ 8356], 99.95th=[ 9110], 00:27:18.929 | 99.99th=[ 9634] 00:27:18.929 bw ( KiB/s): min=46768, max=48240, per=99.96%, avg=47550.00, stdev=666.85, samples=4 00:27:18.929 iops : min=11692, max=12060, avg=11887.50, stdev=166.71, samples=4 00:27:18.929 write: IOPS=11.8k, BW=46.2MiB/s (48.5MB/s)(92.7MiB/2005msec); 0 zone resets 00:27:18.929 slat (nsec): min=1562, max=226808, avg=1791.32, stdev=1648.08 00:27:18.929 clat (usec): min=2530, max=9536, avg=4821.94, stdev=390.80 00:27:18.929 lat (usec): min=2545, max=9538, avg=4823.73, stdev=390.84 00:27:18.929 clat percentiles (usec): 00:27:18.929 | 1.00th=[ 3949], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4555], 00:27:18.929 | 30.00th=[ 4621], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4883], 
00:27:18.929 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5276], 95.00th=[ 5407], 00:27:18.929 | 99.00th=[ 5735], 99.50th=[ 5932], 99.90th=[ 7570], 99.95th=[ 8979], 00:27:18.929 | 99.99th=[ 9503] 00:27:18.929 bw ( KiB/s): min=47048, max=47552, per=100.00%, avg=47350.00, stdev=221.64, samples=4 00:27:18.929 iops : min=11762, max=11888, avg=11837.50, stdev=55.41, samples=4 00:27:18.929 lat (msec) : 4=0.71%, 10=99.29% 00:27:18.929 cpu : usr=74.85%, sys=24.15%, ctx=118, majf=0, minf=2 00:27:18.929 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:18.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:18.929 issued rwts: total=23843,23734,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.929 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:18.929 00:27:18.929 Run status group 0 (all jobs): 00:27:18.929 READ: bw=46.5MiB/s (48.7MB/s), 46.5MiB/s-46.5MiB/s (48.7MB/s-48.7MB/s), io=93.1MiB (97.7MB), run=2005-2005msec 00:27:18.929 WRITE: bw=46.2MiB/s (48.5MB/s), 46.2MiB/s-46.2MiB/s (48.5MB/s-48.5MB/s), io=92.7MiB (97.2MB), run=2005-2005msec 00:27:18.929 13:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:18.929 13:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:18.929 13:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:18.929 13:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:27:18.929 13:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:18.929 13:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:18.929 13:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:27:18.929 13:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:18.929 13:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:18.929 13:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:18.929 13:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:27:18.929 13:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:18.929 13:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:18.929 13:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:18.929 13:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:18.929 13:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:18.929 13:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:18.929 13:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:18.929 13:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:18.929 13:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:27:18.929 13:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:18.929 13:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:18.929 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:18.929 fio-3.35 00:27:18.929 Starting 1 thread 00:27:20.300 [2024-12-05 13:59:02.660185] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe25380 is same with the state(6) to be set 00:27:20.300 [2024-12-05 13:59:02.660248] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe25380 is same with the state(6) to be set 00:27:21.235 00:27:21.235 test: (groupid=0, jobs=1): err= 0: pid=759803: Thu Dec 5 13:59:03 2024 00:27:21.235 read: IOPS=11.0k, BW=172MiB/s (181MB/s)(345MiB/2006msec) 00:27:21.235 slat (nsec): min=2481, max=99811, avg=2792.36, stdev=1296.90 00:27:21.235 clat (usec): min=2143, max=12727, avg=6706.83, stdev=1599.73 00:27:21.235 lat (usec): min=2146, max=12741, avg=6709.62, stdev=1599.86 00:27:21.235 clat percentiles (usec): 00:27:21.235 | 1.00th=[ 3589], 5.00th=[ 4228], 10.00th=[ 4686], 20.00th=[ 5276], 00:27:21.235 | 30.00th=[ 5735], 40.00th=[ 6194], 50.00th=[ 6652], 60.00th=[ 7111], 00:27:21.235 | 70.00th=[ 7504], 80.00th=[ 8029], 90.00th=[ 8717], 95.00th=[ 9503], 00:27:21.235 | 99.00th=[10814], 99.50th=[11469], 99.90th=[12125], 99.95th=[12387], 00:27:21.235 | 99.99th=[12649] 00:27:21.235 bw ( KiB/s): min=88096, max=94208, per=50.94%, avg=89808.00, stdev=2939.88, samples=4 00:27:21.235 iops : min= 5506, max= 5888, avg=5613.00, stdev=183.74, samples=4 00:27:21.235 write: IOPS=6472, BW=101MiB/s (106MB/s)(184MiB/1815msec); 0 zone 
resets 00:27:21.235 slat (usec): min=29, max=384, avg=31.32, stdev= 7.46 00:27:21.235 clat (usec): min=3435, max=14608, avg=8576.83, stdev=1486.18 00:27:21.235 lat (usec): min=3465, max=14719, avg=8608.14, stdev=1487.78 00:27:21.235 clat percentiles (usec): 00:27:21.235 | 1.00th=[ 5800], 5.00th=[ 6456], 10.00th=[ 6849], 20.00th=[ 7373], 00:27:21.235 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8356], 60.00th=[ 8717], 00:27:21.235 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10683], 95.00th=[11338], 00:27:21.236 | 99.00th=[12649], 99.50th=[13042], 99.90th=[14091], 99.95th=[14353], 00:27:21.236 | 99.99th=[14484] 00:27:21.236 bw ( KiB/s): min=91232, max=98304, per=90.18%, avg=93384.00, stdev=3302.50, samples=4 00:27:21.236 iops : min= 5702, max= 6144, avg=5836.50, stdev=206.41, samples=4 00:27:21.236 lat (msec) : 4=2.24%, 10=89.48%, 20=8.28% 00:27:21.236 cpu : usr=86.68%, sys=12.67%, ctx=30, majf=0, minf=2 00:27:21.236 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:21.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:21.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:21.236 issued rwts: total=22102,11747,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:21.236 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:21.236 00:27:21.236 Run status group 0 (all jobs): 00:27:21.236 READ: bw=172MiB/s (181MB/s), 172MiB/s-172MiB/s (181MB/s-181MB/s), io=345MiB (362MB), run=2006-2006msec 00:27:21.236 WRITE: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=184MiB (192MB), run=1815-1815msec 00:27:21.236 13:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:21.495 13:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:27:21.495 13:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT 
SIGTERM EXIT 00:27:21.495 13:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:27:21.495 13:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:27:21.495 13:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:21.495 13:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:27:21.495 13:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:21.495 13:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:27:21.495 13:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:21.495 13:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:21.495 rmmod nvme_tcp 00:27:21.495 rmmod nvme_fabrics 00:27:21.495 rmmod nvme_keyring 00:27:21.495 13:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:21.495 13:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:27:21.495 13:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:27:21.495 13:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 758626 ']' 00:27:21.495 13:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 758626 00:27:21.495 13:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 758626 ']' 00:27:21.495 13:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 758626 00:27:21.495 13:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:27:21.495 13:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:21.495 13:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 758626 00:27:21.495 13:59:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:21.495 13:59:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:21.495 13:59:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 758626' 00:27:21.495 killing process with pid 758626 00:27:21.495 13:59:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 758626 00:27:21.495 13:59:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 758626 00:27:21.755 13:59:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:21.755 13:59:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:21.755 13:59:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:21.755 13:59:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:27:21.755 13:59:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:27:21.755 13:59:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:21.755 13:59:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:21.755 13:59:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:21.755 13:59:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:21.755 13:59:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.755 13:59:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:21.755 13:59:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:24.305 00:27:24.305 real 0m16.198s 00:27:24.305 
user 0m48.109s 00:27:24.305 sys 0m6.468s 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.305 ************************************ 00:27:24.305 END TEST nvmf_fio_host 00:27:24.305 ************************************ 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.305 ************************************ 00:27:24.305 START TEST nvmf_failover 00:27:24.305 ************************************ 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:24.305 * Looking for test storage... 
00:27:24.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:24.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.305 --rc genhtml_branch_coverage=1 00:27:24.305 --rc genhtml_function_coverage=1 00:27:24.305 --rc genhtml_legend=1 00:27:24.305 --rc geninfo_all_blocks=1 00:27:24.305 --rc geninfo_unexecuted_blocks=1 00:27:24.305 00:27:24.305 ' 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:27:24.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.305 --rc genhtml_branch_coverage=1 00:27:24.305 --rc genhtml_function_coverage=1 00:27:24.305 --rc genhtml_legend=1 00:27:24.305 --rc geninfo_all_blocks=1 00:27:24.305 --rc geninfo_unexecuted_blocks=1 00:27:24.305 00:27:24.305 ' 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:24.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.305 --rc genhtml_branch_coverage=1 00:27:24.305 --rc genhtml_function_coverage=1 00:27:24.305 --rc genhtml_legend=1 00:27:24.305 --rc geninfo_all_blocks=1 00:27:24.305 --rc geninfo_unexecuted_blocks=1 00:27:24.305 00:27:24.305 ' 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:24.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.305 --rc genhtml_branch_coverage=1 00:27:24.305 --rc genhtml_function_coverage=1 00:27:24.305 --rc genhtml_legend=1 00:27:24.305 --rc geninfo_all_blocks=1 00:27:24.305 --rc geninfo_unexecuted_blocks=1 00:27:24.305 00:27:24.305 ' 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:24.305 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:24.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:27:24.306 13:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:30.874 13:59:12 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:30.874 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.874 13:59:12 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:30.874 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.874 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:30.875 13:59:12 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:30.875 Found net devices under 0000:86:00.0: cvl_0_0 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:30.875 Found net devices under 0000:86:00.1: cvl_0_1 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:30.875 13:59:12 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:30.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:30.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:27:30.875 00:27:30.875 --- 10.0.0.2 ping statistics --- 00:27:30.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.875 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:30.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:30.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:27:30.875 00:27:30.875 --- 10.0.0.1 ping statistics --- 00:27:30.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.875 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=763587 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 763587 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 763587 ']' 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:30.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:30.875 [2024-12-05 13:59:12.529332] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:27:30.875 [2024-12-05 13:59:12.529390] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:30.875 [2024-12-05 13:59:12.609234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:30.875 [2024-12-05 13:59:12.653956] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:30.875 [2024-12-05 13:59:12.653988] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:30.875 [2024-12-05 13:59:12.653996] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:30.875 [2024-12-05 13:59:12.654002] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:30.875 [2024-12-05 13:59:12.654007] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:30.875 [2024-12-05 13:59:12.655315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:30.875 [2024-12-05 13:59:12.655422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.875 [2024-12-05 13:59:12.655423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:30.875 [2024-12-05 13:59:12.952037] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:30.875 13:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:30.875 Malloc0 00:27:30.875 13:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:30.875 13:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:31.133 13:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:31.390 [2024-12-05 13:59:13.737835] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:31.390 13:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:31.390 [2024-12-05 13:59:13.938360] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:31.390 13:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:31.648 [2024-12-05 13:59:14.147072] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:31.648 13:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=764019 00:27:31.648 13:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:31.648 13:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:31.648 13:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 764019 /var/tmp/bdevperf.sock 00:27:31.648 13:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- 
# '[' -z 764019 ']' 00:27:31.648 13:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:31.648 13:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:31.648 13:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:31.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:31.648 13:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:31.648 13:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:31.905 13:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:31.905 13:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:27:31.905 13:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:32.162 NVMe0n1 00:27:32.163 13:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:32.724 00:27:32.724 13:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:32.724 13:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=764071 00:27:32.724 13:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:27:33.653 13:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:33.911 [2024-12-05 13:59:16.308719] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1595120 is same with the state(6) to be set 00:27:33.911 13:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:27:37.184 13:59:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:37.184
00:27:37.184 13:59:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:37.440 13:59:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:27:40.745 13:59:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:40.745 [2024-12-05 13:59:23.038133] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:40.745 13:59:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:27:41.675 13:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:41.931 13:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 764071 00:27:48.483 { 00:27:48.483 "results": [ 00:27:48.483 { 00:27:48.483 "job": "NVMe0n1", 00:27:48.483 "core_mask": "0x1", 00:27:48.483 "workload": "verify", 00:27:48.483 "status": "finished", 00:27:48.483 "verify_range": { 00:27:48.483 "start": 0, 00:27:48.483 "length": 16384 00:27:48.483 }, 00:27:48.483 "queue_depth": 128, 00:27:48.483 "io_size": 4096, 00:27:48.483 "runtime": 15.001967, 00:27:48.483 "iops": 11290.119488997676, 00:27:48.483 "mibps": 44.10202925389717, 00:27:48.483 "io_failed": 10549, 00:27:48.483 "io_timeout": 0, 00:27:48.483 "avg_latency_us": 10651.018451268705, 00:27:48.483 "min_latency_us": 384.24380952380955, 00:27:48.483 "max_latency_us": 12545.462857142857 00:27:48.483 } 00:27:48.483 ], 00:27:48.483 "core_count": 1 00:27:48.483 } 00:27:48.483 13:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 764019 00:27:48.483 13:59:30 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 764019 ']' 00:27:48.483 13:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 764019 00:27:48.483 13:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:27:48.483 13:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:48.483 13:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 764019 00:27:48.483 13:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:48.483 13:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:48.483 13:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 764019' 00:27:48.483 killing process with pid 764019 00:27:48.483 13:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 764019 00:27:48.483 13:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 764019 00:27:48.483 13:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:48.483 [2024-12-05 13:59:14.224707] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:27:48.483 [2024-12-05 13:59:14.224766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid764019 ] 00:27:48.483 [2024-12-05 13:59:14.300520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.483 [2024-12-05 13:59:14.341671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.483 Running I/O for 15 seconds... 
00:27:48.483 11143.00 IOPS, 43.53 MiB/s [2024-12-05T12:59:31.070Z]
00:27:48.483 [2024-12-05 13:59:16.309482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.483 [2024-12-05 13:59:16.309515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.485 [... identical print_command/print_completion pairs repeat for READ lba:98544-99096 and WRITE lba:99104-99384 on sqid:1, each command completed with ABORTED - SQ DELETION (00/08) ...]
00:27:48.485 [2024-12-05 13:59:16.311098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.485 [2024-12-05 13:59:16.311104] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.485 [2024-12-05 13:59:16.311112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.485 [2024-12-05 13:59:16.311118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.485 [2024-12-05 13:59:16.311126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.485 [2024-12-05 13:59:16.311133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.485 [2024-12-05 13:59:16.311141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.485 [2024-12-05 13:59:16.311148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.485 [2024-12-05 13:59:16.311155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.485 [2024-12-05 13:59:16.311162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.485 [2024-12-05 13:59:16.311171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.485 [2024-12-05 13:59:16.311177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.485 [2024-12-05 13:59:16.311185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:27:48.485 [2024-12-05 13:59:16.311191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.485 [2024-12-05 13:59:16.311200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.485 [2024-12-05 13:59:16.311206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.485 [2024-12-05 13:59:16.311214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.485 [2024-12-05 13:59:16.311220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.485 [2024-12-05 13:59:16.311228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.485 [2024-12-05 13:59:16.311235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.485 [2024-12-05 13:59:16.311242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.485 [2024-12-05 13:59:16.311248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.485 [2024-12-05 13:59:16.311256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.485 [2024-12-05 13:59:16.311262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.485 [2024-12-05 13:59:16.311270] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.485 [2024-12-05 13:59:16.311277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.485 [2024-12-05 13:59:16.311284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.485 [2024-12-05 13:59:16.311291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.485 [2024-12-05 13:59:16.311299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.485 [2024-12-05 13:59:16.311305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.485 [2024-12-05 13:59:16.311313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.485 [2024-12-05 13:59:16.311320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.485 [2024-12-05 13:59:16.311328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.485 [2024-12-05 13:59:16.311334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.485 [2024-12-05 13:59:16.311342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.485 [2024-12-05 13:59:16.311350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.485 [2024-12-05 13:59:16.311358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.485 [2024-12-05 13:59:16.311364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.485 [2024-12-05 13:59:16.311375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.485 [2024-12-05 13:59:16.311383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.485 [2024-12-05 13:59:16.311416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:48.485 [2024-12-05 13:59:16.311422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:48.485 [2024-12-05 13:59:16.311428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99552 len:8 PRP1 0x0 PRP2 0x0
00:27:48.485 [2024-12-05 13:59:16.311435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.485 [2024-12-05 13:59:16.311479] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:27:48.485 [2024-12-05 13:59:16.311500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:48.485 [2024-12-05 13:59:16.311508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.485 [2024-12-05 13:59:16.311516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:48.485 [2024-12-05 13:59:16.311522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.485 [2024-12-05 13:59:16.311529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:48.485 [2024-12-05 13:59:16.311537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.485 [2024-12-05 13:59:16.311544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:48.485 [2024-12-05 13:59:16.311551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.485 [2024-12-05 13:59:16.311557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:27:48.485 [2024-12-05 13:59:16.314349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:27:48.485 [2024-12-05 13:59:16.314381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d39370 (9): Bad file descriptor
00:27:48.485 [2024-12-05 13:59:16.341495] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:27:48.485 11087.00 IOPS, 43.31 MiB/s [2024-12-05T12:59:31.072Z] 11167.33 IOPS, 43.62 MiB/s [2024-12-05T12:59:31.072Z] 11244.75 IOPS, 43.92 MiB/s [2024-12-05T12:59:31.072Z]
[2024-12-05 13:59:19.830608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:35248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.485 [2024-12-05 13:59:19.830646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE (lba:35256-35296) and READ (lba:34296-34768) / "ABORTED - SQ DELETION (00/08)" pairs (2024-12-05 13:59:19.830661 - 13:59:19.831640) elided ...]
00:27:48.487 [2024-12-05 13:59:19.831647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:34776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.487 [2024-12-05 13:59:19.831654] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.831662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:34784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.831669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.831676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:34792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.831683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.831691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:34800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.831697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.831705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:34808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.831711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.831720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:34816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.831726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.831734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:34824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.831741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.831749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.831755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.831763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.831769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.831777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:34848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.831784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.831791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.831799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.831807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:34864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.831814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 
[2024-12-05 13:59:19.831822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:34872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.831828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.831836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:34880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.831843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.831850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:34888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.831857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.831864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:34896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.831871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.831878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:34904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.831885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.831893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:34912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.831899] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.831907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:34920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.831913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.831921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:34928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.831927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.831935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.831943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.831951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:34944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.831957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.831965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:34952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.831971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.831981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 
lba:34960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.831988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.831996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:34968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.832003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.832010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:34976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.832017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.832025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.832031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.832039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:34992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.832046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.832054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:35000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.832060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 
[2024-12-05 13:59:19.832068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:35008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.832075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.832083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.832090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.832099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.832105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.832113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:35032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.832119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.832127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:35040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.832133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.832141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.832148] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.832155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.487 [2024-12-05 13:59:19.832163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.832171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.487 [2024-12-05 13:59:19.832177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.832185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.832192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.832200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.832206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.832214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:35072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.487 [2024-12-05 13:59:19.832221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.487 [2024-12-05 13:59:19.832229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 
lba:35080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.488 [2024-12-05 13:59:19.832235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:19.832242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:35088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.488 [2024-12-05 13:59:19.832250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:19.832258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:35096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.488 [2024-12-05 13:59:19.832265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:19.832272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.488 [2024-12-05 13:59:19.832286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:19.832294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:35112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.488 [2024-12-05 13:59:19.832301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:19.832309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:35120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.488 [2024-12-05 13:59:19.832315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 
[2024-12-05 13:59:19.832323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:35128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.488 [2024-12-05 13:59:19.832330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:19.832337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.488 [2024-12-05 13:59:19.832344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:19.832352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:35144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.488 [2024-12-05 13:59:19.832360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:19.832372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:35152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.488 [2024-12-05 13:59:19.832379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:19.832387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:35160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.488 [2024-12-05 13:59:19.832393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:19.832401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:35168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.488 [2024-12-05 13:59:19.832407] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:19.832415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:35176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.488 [2024-12-05 13:59:19.832422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:19.832430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:35184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.488 [2024-12-05 13:59:19.832436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:19.832444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:35192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.488 [2024-12-05 13:59:19.832450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:19.832458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:35200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.488 [2024-12-05 13:59:19.832465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:19.832473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:35208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.488 [2024-12-05 13:59:19.832480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:19.832488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 
lba:35216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.488 [2024-12-05 13:59:19.832494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:19.832502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:35224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.488 [2024-12-05 13:59:19.832517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:19.832527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:35232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.488 [2024-12-05 13:59:19.832534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:19.832543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67ad0 is same with the state(6) to be set 00:27:48.488 [2024-12-05 13:59:19.832551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:48.488 [2024-12-05 13:59:19.832559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:48.488 [2024-12-05 13:59:19.832565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35240 len:8 PRP1 0x0 PRP2 0x0 00:27:48.488 [2024-12-05 13:59:19.832571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:19.832613] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:27:48.488 [2024-12-05 13:59:19.832634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 
nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.488 [2024-12-05 13:59:19.832641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:19.832648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.488 [2024-12-05 13:59:19.832654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:19.832663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.488 [2024-12-05 13:59:19.832670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:19.832677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.488 [2024-12-05 13:59:19.832683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:19.832689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:27:48.488 [2024-12-05 13:59:19.835506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:27:48.488 [2024-12-05 13:59:19.835535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d39370 (9): Bad file descriptor 00:27:48.488 [2024-12-05 13:59:19.992049] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:27:48.488 10871.60 IOPS, 42.47 MiB/s [2024-12-05T12:59:31.075Z] 10983.17 IOPS, 42.90 MiB/s [2024-12-05T12:59:31.075Z] 11046.43 IOPS, 43.15 MiB/s [2024-12-05T12:59:31.075Z] 11098.50 IOPS, 43.35 MiB/s [2024-12-05T12:59:31.075Z] 11156.89 IOPS, 43.58 MiB/s [2024-12-05T12:59:31.075Z] [2024-12-05 13:59:24.259814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.488 [2024-12-05 13:59:24.259850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:24.259866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.488 [2024-12-05 13:59:24.259873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:24.259882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:94048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.488 [2024-12-05 13:59:24.259890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:24.259898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:94056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.488 [2024-12-05 13:59:24.259905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:24.259913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.488 [2024-12-05 13:59:24.259924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:24.259932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:94072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.488 [2024-12-05 13:59:24.259939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:24.259948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:93072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.488 [2024-12-05 13:59:24.259954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:24.259962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.488 [2024-12-05 13:59:24.259969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:24.259977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:93088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.488 [2024-12-05 13:59:24.259983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:24.259992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:93096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.488 [2024-12-05 13:59:24.259998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.488 [2024-12-05 13:59:24.260006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:93104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.488 [2024-12-05 
13:59:24.260012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.488 [2024-12-05 13:59:24.260020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:93112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.488 [2024-12-05 13:59:24.260026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.488 [2024-12-05 13:59:24.260034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:93120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.488 [2024-12-05 13:59:24.260041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.488 [2024-12-05 13:59:24.260049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.488 [2024-12-05 13:59:24.260056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.488 [2024-12-05 13:59:24.260064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:93136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.488 [2024-12-05 13:59:24.260070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.488 [2024-12-05 13:59:24.260078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:93144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.488 [2024-12-05 13:59:24.260090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.488 [2024-12-05 13:59:24.260098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:93152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.488 [2024-12-05 13:59:24.260105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.488 [2024-12-05 13:59:24.260114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:93160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.488 [2024-12-05 13:59:24.260123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.488 [2024-12-05 13:59:24.260131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:93168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.488 [2024-12-05 13:59:24.260138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.488 [2024-12-05 13:59:24.260146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:93176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.488 [2024-12-05 13:59:24.260152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.488 [2024-12-05 13:59:24.260161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:93184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.488 [2024-12-05 13:59:24.260167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.488 [2024-12-05 13:59:24.260176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:94080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.488 [2024-12-05 13:59:24.260182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.488 [2024-12-05 13:59:24.260190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:93192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.488 [2024-12-05 13:59:24.260197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.488 [2024-12-05 13:59:24.260205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:93200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.488 [2024-12-05 13:59:24.260212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.488 [2024-12-05 13:59:24.260220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:93208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.488 [2024-12-05 13:59:24.260226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.488 [2024-12-05 13:59:24.260234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:93216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.488 [2024-12-05 13:59:24.260240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.488 [2024-12-05 13:59:24.260248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:93224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.488 [2024-12-05 13:59:24.260255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.488 [2024-12-05 13:59:24.260263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:93232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:93240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:93248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:93256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:93264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:93272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:93280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:93288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:93296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:93312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:93320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:93336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:93344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:93352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:93360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:93368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:93376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:93384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:93392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:93400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:93408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:93416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:93424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:93432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:93440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:93456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:93464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:93472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:93480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:93488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:93496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:93504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:93512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:93520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:93528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:93536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:93544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:93552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:93560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:93568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:93576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:93584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.260984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:93592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.260994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.261004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:93600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.261011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.261019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:93608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.261025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.261033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:93616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.261040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.261048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:93624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.261055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.261063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:93632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.261069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.489 [2024-12-05 13:59:24.261078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:93640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.489 [2024-12-05 13:59:24.261084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:93648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:93656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:93664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:93672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:93680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:93688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:93696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:93704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:93712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:93720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:93728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:93736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:93744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:93752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:93760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:93768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:93776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:93792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:93816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:93824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:93832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:93840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:93848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:93864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:93888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:93904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:93912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:93920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:93952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:93960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.490 [2024-12-05 13:59:24.261726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:93984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490 [2024-12-05 13:59:24.261778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:93992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.490 [2024-12-05 13:59:24.261785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.490
[2024-12-05 13:59:24.261793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:94000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.490 [2024-12-05 13:59:24.261800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.490 [2024-12-05 13:59:24.261808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:94008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.490 [2024-12-05 13:59:24.261815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.490 [2024-12-05 13:59:24.261823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:94016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.490 [2024-12-05 13:59:24.261829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.490 [2024-12-05 13:59:24.261837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e942d0 is same with the state(6) to be set 00:27:48.490 [2024-12-05 13:59:24.261846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:48.490 [2024-12-05 13:59:24.261851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:48.490 [2024-12-05 13:59:24.261857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94024 len:8 PRP1 0x0 PRP2 0x0 00:27:48.490 [2024-12-05 13:59:24.261863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.490 [2024-12-05 13:59:24.261908] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:27:48.490 [2024-12-05 
13:59:24.261931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.490 [2024-12-05 13:59:24.261940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.490 [2024-12-05 13:59:24.261949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.490 [2024-12-05 13:59:24.261956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.490 [2024-12-05 13:59:24.261963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.490 [2024-12-05 13:59:24.261970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.490 [2024-12-05 13:59:24.261977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.490 [2024-12-05 13:59:24.261987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.490 [2024-12-05 13:59:24.261994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:27:48.490 [2024-12-05 13:59:24.264823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:27:48.490 [2024-12-05 13:59:24.264856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d39370 (9): Bad file descriptor 00:27:48.490 [2024-12-05 13:59:24.290272] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:27:48.491 11156.70 IOPS, 43.58 MiB/s [2024-12-05T12:59:31.078Z] 11202.91 IOPS, 43.76 MiB/s [2024-12-05T12:59:31.078Z] 11222.17 IOPS, 43.84 MiB/s [2024-12-05T12:59:31.078Z] 11244.38 IOPS, 43.92 MiB/s [2024-12-05T12:59:31.078Z] 11270.07 IOPS, 44.02 MiB/s 00:27:48.491 Latency(us) 00:27:48.491 [2024-12-05T12:59:31.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:48.491 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:48.491 Verification LBA range: start 0x0 length 0x4000 00:27:48.491 NVMe0n1 : 15.00 11290.12 44.10 703.17 0.00 10651.02 384.24 12545.46 00:27:48.491 [2024-12-05T12:59:31.078Z] =================================================================================================================== 00:27:48.491 [2024-12-05T12:59:31.078Z] Total : 11290.12 44.10 703.17 0.00 10651.02 384.24 12545.46 00:27:48.491 Received shutdown signal, test time was about 15.000000 seconds 00:27:48.491 00:27:48.491 Latency(us) 00:27:48.491 [2024-12-05T12:59:31.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:48.491 [2024-12-05T12:59:31.078Z] =================================================================================================================== 00:27:48.491 [2024-12-05T12:59:31.078Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:48.491 13:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:27:48.491 13:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:27:48.491 13:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:27:48.491 13:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=766591 00:27:48.491 13:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:27:48.491 
13:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 766591 /var/tmp/bdevperf.sock 00:27:48.491 13:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 766591 ']' 00:27:48.491 13:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:48.491 13:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:48.491 13:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:48.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:48.491 13:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:48.491 13:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:48.491 13:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:48.491 13:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:27:48.491 13:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:48.491 [2024-12-05 13:59:30.917698] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:48.491 13:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:48.747 [2024-12-05 13:59:31.126320] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:48.747 13:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:49.004 NVMe0n1 00:27:49.259 13:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:49.516 00:27:49.516 13:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:49.773 00:27:49.773 13:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:49.773 13:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:27:50.030 13:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:50.287 13:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:27:53.557 13:59:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:53.557 13:59:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:27:53.557 13:59:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 
00:27:53.557 13:59:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=767511 00:27:53.557 13:59:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 767511 00:27:54.485 { 00:27:54.485 "results": [ 00:27:54.485 { 00:27:54.485 "job": "NVMe0n1", 00:27:54.485 "core_mask": "0x1", 00:27:54.485 "workload": "verify", 00:27:54.485 "status": "finished", 00:27:54.485 "verify_range": { 00:27:54.485 "start": 0, 00:27:54.486 "length": 16384 00:27:54.486 }, 00:27:54.486 "queue_depth": 128, 00:27:54.486 "io_size": 4096, 00:27:54.486 "runtime": 1.014017, 00:27:54.486 "iops": 11325.25391586137, 00:27:54.486 "mibps": 44.23927310883348, 00:27:54.486 "io_failed": 0, 00:27:54.486 "io_timeout": 0, 00:27:54.486 "avg_latency_us": 11257.815679288784, 00:27:54.486 "min_latency_us": 2324.967619047619, 00:27:54.486 "max_latency_us": 13606.521904761905 00:27:54.486 } 00:27:54.486 ], 00:27:54.486 "core_count": 1 00:27:54.486 } 00:27:54.486 13:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:54.486 [2024-12-05 13:59:30.534838] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:27:54.486 [2024-12-05 13:59:30.534888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid766591 ] 00:27:54.486 [2024-12-05 13:59:30.611010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.486 [2024-12-05 13:59:30.649424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.486 [2024-12-05 13:59:32.643727] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:54.486 [2024-12-05 13:59:32.643772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.486 [2024-12-05 13:59:32.643783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.486 [2024-12-05 13:59:32.643792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.486 [2024-12-05 13:59:32.643799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.486 [2024-12-05 13:59:32.643807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.486 [2024-12-05 13:59:32.643813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.486 [2024-12-05 13:59:32.643820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:54.486 [2024-12-05 13:59:32.643827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.486 [2024-12-05 13:59:32.643834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:27:54.486 [2024-12-05 13:59:32.643859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:27:54.486 [2024-12-05 13:59:32.643872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0c370 (9): Bad file descriptor 00:27:54.486 [2024-12-05 13:59:32.777531] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:27:54.486 Running I/O for 1 seconds... 00:27:54.486 11244.00 IOPS, 43.92 MiB/s 00:27:54.486 Latency(us) 00:27:54.486 [2024-12-05T12:59:37.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:54.486 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:54.486 Verification LBA range: start 0x0 length 0x4000 00:27:54.486 NVMe0n1 : 1.01 11325.25 44.24 0.00 0.00 11257.82 2324.97 13606.52 00:27:54.486 [2024-12-05T12:59:37.073Z] =================================================================================================================== 00:27:54.486 [2024-12-05T12:59:37.073Z] Total : 11325.25 44.24 0.00 0.00 11257.82 2324.97 13606.52 00:27:54.486 13:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:54.486 13:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:27:54.741 13:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:54.995 13:59:37 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:54.995 13:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:27:55.251 13:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:55.251 13:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:27:58.521 13:59:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:58.521 13:59:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:27:58.521 13:59:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 766591 00:27:58.521 13:59:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 766591 ']' 00:27:58.521 13:59:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 766591 00:27:58.521 13:59:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:27:58.521 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:58.521 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 766591 00:27:58.521 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:58.521 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:58.521 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 766591' 00:27:58.521 killing process 
with pid 766591 00:27:58.521 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 766591 00:27:58.521 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 766591 00:27:58.778 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:27:58.778 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:59.035 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:27:59.035 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:59.035 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:27:59.035 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:59.035 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:27:59.035 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:59.035 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:27:59.035 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:59.035 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:59.035 rmmod nvme_tcp 00:27:59.035 rmmod nvme_fabrics 00:27:59.035 rmmod nvme_keyring 00:27:59.035 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:59.035 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:27:59.035 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:27:59.035 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 763587 ']' 00:27:59.035 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@518 -- # killprocess 763587 00:27:59.035 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 763587 ']' 00:27:59.035 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 763587 00:27:59.035 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:27:59.035 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:59.035 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 763587 00:27:59.035 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:59.035 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:59.035 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 763587' 00:27:59.035 killing process with pid 763587 00:27:59.035 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 763587 00:27:59.035 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 763587 00:27:59.294 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:59.294 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:59.294 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:59.294 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:27:59.294 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:27:59.294 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:59.294 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:27:59.294 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:59.294 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:59.294 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:59.294 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:59.294 13:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.830 13:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:01.830 00:28:01.830 real 0m37.437s 00:28:01.830 user 1m58.595s 00:28:01.830 sys 0m7.891s 00:28:01.830 13:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:01.830 13:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:01.830 ************************************ 00:28:01.830 END TEST nvmf_failover 00:28:01.830 ************************************ 00:28:01.830 13:59:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:01.830 13:59:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:01.830 13:59:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:01.830 13:59:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.830 ************************************ 00:28:01.830 START TEST nvmf_host_discovery 00:28:01.830 ************************************ 00:28:01.830 13:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:01.830 * Looking for test storage... 
00:28:01.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:01.830 13:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:01.830 13:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:28:01.830 13:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:01.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:01.830 --rc genhtml_branch_coverage=1 00:28:01.830 --rc genhtml_function_coverage=1 00:28:01.830 --rc 
genhtml_legend=1 00:28:01.830 --rc geninfo_all_blocks=1 00:28:01.830 --rc geninfo_unexecuted_blocks=1 00:28:01.830 00:28:01.830 ' 00:28:01.830 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:01.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:01.830 --rc genhtml_branch_coverage=1 00:28:01.830 --rc genhtml_function_coverage=1 00:28:01.830 --rc genhtml_legend=1 00:28:01.830 --rc geninfo_all_blocks=1 00:28:01.831 --rc geninfo_unexecuted_blocks=1 00:28:01.831 00:28:01.831 ' 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:01.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:01.831 --rc genhtml_branch_coverage=1 00:28:01.831 --rc genhtml_function_coverage=1 00:28:01.831 --rc genhtml_legend=1 00:28:01.831 --rc geninfo_all_blocks=1 00:28:01.831 --rc geninfo_unexecuted_blocks=1 00:28:01.831 00:28:01.831 ' 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:01.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:01.831 --rc genhtml_branch_coverage=1 00:28:01.831 --rc genhtml_function_coverage=1 00:28:01.831 --rc genhtml_legend=1 00:28:01.831 --rc geninfo_all_blocks=1 00:28:01.831 --rc geninfo_unexecuted_blocks=1 00:28:01.831 00:28:01.831 ' 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:01.831 13:59:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:01.831 13:59:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:01.831 13:59:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:01.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:28:01.831 13:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:28:07.103 
13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:07.103 13:59:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:07.103 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:07.103 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:07.103 Found net devices under 0000:86:00.0: cvl_0_0 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.103 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:07.363 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:07.363 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:07.363 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:07.363 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.363 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:07.363 Found net devices under 0000:86:00.1: cvl_0_1 00:28:07.363 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:07.363 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:07.363 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:28:07.363 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:07.363 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:07.363 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:07.363 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:07.363 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:07.363 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:07.363 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:07.363 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:07.363 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:07.363 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:07.364 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:07.364 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:07.364 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:07.364 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:07.364 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:07.364 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:07.364 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:07.364 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:07.364 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:07.364 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:07.364 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:07.364 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:07.364 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:07.364 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:07.364 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:07.364 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:07.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:07.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:28:07.364 00:28:07.364 --- 10.0.0.2 ping statistics --- 00:28:07.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.364 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:28:07.364 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:07.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:07.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:28:07.364 00:28:07.364 --- 10.0.0.1 ping statistics --- 00:28:07.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.364 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:28:07.364 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:07.364 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:28:07.364 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:07.364 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:07.364 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:07.364 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:07.364 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:07.622 
13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:07.622 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:07.622 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:28:07.622 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:07.622 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:07.622 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:07.622 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=771962 00:28:07.622 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:07.622 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 771962 00:28:07.622 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 771962 ']' 00:28:07.622 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.622 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:07.622 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:07.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:07.622 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:07.622 13:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:07.622 [2024-12-05 13:59:50.045215] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:28:07.622 [2024-12-05 13:59:50.045264] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:07.622 [2024-12-05 13:59:50.124490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.622 [2024-12-05 13:59:50.163121] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:07.622 [2024-12-05 13:59:50.163159] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:07.622 [2024-12-05 13:59:50.163166] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:07.623 [2024-12-05 13:59:50.163181] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:07.623 [2024-12-05 13:59:50.163187] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:07.623 [2024-12-05 13:59:50.163748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:07.880 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:07.880 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:28:07.880 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:07.880 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:07.880 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:07.880 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:07.880 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:07.880 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.880 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:07.880 [2024-12-05 13:59:50.308665] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:07.880 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.880 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:28:07.880 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.880 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:07.880 [2024-12-05 13:59:50.320859] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:07.880 13:59:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.880 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:28:07.880 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.880 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:07.880 null0 00:28:07.880 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.880 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:28:07.880 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.880 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:07.880 null1 00:28:07.880 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.881 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:28:07.881 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.881 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:07.881 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.881 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=771981 00:28:07.881 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:28:07.881 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 771981 /tmp/host.sock 00:28:07.881 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 771981 ']' 00:28:07.881 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:28:07.881 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:07.881 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:07.881 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:07.881 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:07.881 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:07.881 [2024-12-05 13:59:50.399167] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:28:07.881 [2024-12-05 13:59:50.399210] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid771981 ] 00:28:08.138 [2024-12-05 13:59:50.472463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.138 [2024-12-05 13:59:50.514648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.138 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:08.138 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:28:08.138 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:08.138 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:28:08.138 13:59:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.138 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:08.138 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.138 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:28:08.138 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.138 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:08.138 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.138 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:28:08.138 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:28:08.138 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:08.138 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:08.138 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.138 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:08.138 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:08.138 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:08.138 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.138 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:28:08.138 13:59:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:28:08.138 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:08.138 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:08.138 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.138 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:08.138 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:08.139 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:08.139 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.139 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:28:08.139 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:28:08.139 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.139 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:08.396 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.396 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:28:08.396 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:08.396 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:08.396 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.396 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:28:08.396 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:08.396 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:08.396 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.396 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:28:08.396 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:28:08.397 
13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:08.397 [2024-12-05 13:59:50.942435] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:08.397 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.654 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:28:08.654 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:28:08.654 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:08.654 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:08.654 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.654 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:28:08.654 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:08.654 13:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:08.654 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.654 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:28:08.654 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:28:08.654 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:08.654 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:08.654 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:08.654 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:08.654 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:08.654 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:08.654 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:08.654 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:08.654 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:08.654 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.654 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:08.654 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.654 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:08.654 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:28:08.654 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:08.654 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:08.654 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:28:08.654 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.654 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:08.654 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.654 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:08.655 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:08.655 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:08.655 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:08.655 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:28:08.655 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:08.655 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:08.655 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:08.655 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.655 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:08.655 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:08.655 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:08.655 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.655 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:28:08.655 13:59:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:28:09.219 [2024-12-05 13:59:51.676847] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:09.219 [2024-12-05 13:59:51.676866] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:09.219 [2024-12-05 13:59:51.676878] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:09.475 [2024-12-05 13:59:51.805256] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:09.475 [2024-12-05 13:59:51.907043] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:28:09.475 [2024-12-05 13:59:51.907822] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x21baca0:1 started. 00:28:09.475 [2024-12-05 13:59:51.909231] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:09.475 [2024-12-05 13:59:51.909246] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:09.475 [2024-12-05 13:59:51.915525] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x21baca0 was disconnected and freed. delete nvme_qpair. 00:28:09.732 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:09.732 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:09.732 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:09.732 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:09.732 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:09.732 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.732 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:09.732 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:09.732 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:09.733 13:59:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:09.733 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:09.990 [2024-12-05 13:59:52.339712] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x21bae80:1 started. 
00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:09.990 [2024-12-05 13:59:52.346493] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x21bae80 was disconnected and freed. delete nvme_qpair. 
00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:09.990 [2024-12-05 13:59:52.426396] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:09.990 [2024-12-05 13:59:52.427410] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:09.990 [2024-12-05 13:59:52.427434] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:09.990 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:09.991 13:59:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:09.991 13:59:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:09.991 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.991 [2024-12-05 13:59:52.553799] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:28:10.248 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:28:10.248 13:59:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:28:10.505 [2024-12-05 13:59:52.854125] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:28:10.505 [2024-12-05 13:59:52.854160] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:10.505 [2024-12-05 13:59:52.854167] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:28:10.505 [2024-12-05 13:59:52.854172] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:11.070 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:11.070 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:28:11.070 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:11.070 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:11.070 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:11.070 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.070 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:11.070 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:11.070 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:11.070 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.070 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:28:11.070 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:11.070 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:28:11.070 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:11.070 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:28:11.070 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:11.070 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:11.070 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:11.070 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:11.070 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:11.070 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:11.070 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:11.070 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.070 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:11.070 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.328 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:11.328 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:11.328 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:11.328 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:11.328 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:11.328 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.328 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:11.328 [2024-12-05 13:59:53.682284] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:11.328 [2024-12-05 13:59:53.682306] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:11.328 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.328 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:11.328 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:11.328 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:28:11.328 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:11.328 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:11.328 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:11.328 [2024-12-05 13:59:53.691525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.328 [2024-12-05 13:59:53.691543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.328 [2024-12-05 13:59:53.691553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.328 [2024-12-05 13:59:53.691560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.328 [2024-12-05 13:59:53.691568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.328 [2024-12-05 13:59:53.691575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.328 [2024-12-05 13:59:53.691582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.328 [2024-12-05 13:59:53.691588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.328 [2024-12-05 13:59:53.691595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cde0 is same with the state(6) to be set 00:28:11.328 13:59:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:11.328 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:11.328 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.328 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:11.328 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:11.328 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:11.328 [2024-12-05 13:59:53.701539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218cde0 (9): Bad file descriptor 00:28:11.328 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.328 [2024-12-05 13:59:53.711574] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:11.329 [2024-12-05 13:59:53.711586] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:11.329 [2024-12-05 13:59:53.711593] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:11.329 [2024-12-05 13:59:53.711598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:11.329 [2024-12-05 13:59:53.711615] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:11.329 [2024-12-05 13:59:53.711854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.329 [2024-12-05 13:59:53.711869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218cde0 with addr=10.0.0.2, port=4420 00:28:11.329 [2024-12-05 13:59:53.711877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cde0 is same with the state(6) to be set 00:28:11.329 [2024-12-05 13:59:53.711890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218cde0 (9): Bad file descriptor 00:28:11.329 [2024-12-05 13:59:53.711900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:11.329 [2024-12-05 13:59:53.711907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:11.329 [2024-12-05 13:59:53.711914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:11.329 [2024-12-05 13:59:53.711922] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:11.329 [2024-12-05 13:59:53.711928] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:11.329 [2024-12-05 13:59:53.711932] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:11.329 [2024-12-05 13:59:53.721645] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:11.329 [2024-12-05 13:59:53.721655] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:28:11.329 [2024-12-05 13:59:53.721660] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:11.329 [2024-12-05 13:59:53.721664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:11.329 [2024-12-05 13:59:53.721676] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:11.329 [2024-12-05 13:59:53.721835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.329 [2024-12-05 13:59:53.721847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218cde0 with addr=10.0.0.2, port=4420 00:28:11.329 [2024-12-05 13:59:53.721854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cde0 is same with the state(6) to be set 00:28:11.329 [2024-12-05 13:59:53.721869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218cde0 (9): Bad file descriptor 00:28:11.329 [2024-12-05 13:59:53.721879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:11.329 [2024-12-05 13:59:53.721886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:11.329 [2024-12-05 13:59:53.721893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:11.329 [2024-12-05 13:59:53.721900] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:11.329 [2024-12-05 13:59:53.721904] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:11.329 [2024-12-05 13:59:53.721908] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:28:11.329 [2024-12-05 13:59:53.731707] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:11.329 [2024-12-05 13:59:53.731721] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:11.329 [2024-12-05 13:59:53.731725] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:11.329 [2024-12-05 13:59:53.731729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:11.329 [2024-12-05 13:59:53.731743] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:11.329 [2024-12-05 13:59:53.732011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.329 [2024-12-05 13:59:53.732024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218cde0 with addr=10.0.0.2, port=4420 00:28:11.329 [2024-12-05 13:59:53.732031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cde0 is same with the state(6) to be set 00:28:11.329 [2024-12-05 13:59:53.732043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218cde0 (9): Bad file descriptor 00:28:11.329 [2024-12-05 13:59:53.732052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:11.329 [2024-12-05 13:59:53.732058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:11.329 [2024-12-05 13:59:53.732066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:11.329 [2024-12-05 13:59:53.732071] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:28:11.329 [2024-12-05 13:59:53.732076] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:11.329 [2024-12-05 13:59:53.732080] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:11.329 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.329 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:11.329 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:11.329 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:11.329 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:11.329 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:11.329 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:11.329 [2024-12-05 13:59:53.741774] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:11.329 [2024-12-05 13:59:53.741790] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:11.329 [2024-12-05 13:59:53.741794] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:28:11.329 [2024-12-05 13:59:53.741798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:11.329 [2024-12-05 13:59:53.741811] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:11.329 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:11.329 [2024-12-05 13:59:53.741983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.329 [2024-12-05 13:59:53.741996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218cde0 with addr=10.0.0.2, port=4420 00:28:11.329 [2024-12-05 13:59:53.742004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cde0 is same with the state(6) to be set 00:28:11.329 [2024-12-05 13:59:53.742014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218cde0 (9): Bad file descriptor 00:28:11.329 [2024-12-05 13:59:53.742025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:11.329 [2024-12-05 13:59:53.742032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:11.329 [2024-12-05 13:59:53.742041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:11.329 [2024-12-05 13:59:53.742048] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:11.329 [2024-12-05 13:59:53.742053] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:11.329 [2024-12-05 13:59:53.742058] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:28:11.329 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:11.329 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.329 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:11.329 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:11.329 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:11.329 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:11.329 [2024-12-05 13:59:53.751841] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:11.329 [2024-12-05 13:59:53.751856] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:11.329 [2024-12-05 13:59:53.751861] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:11.329 [2024-12-05 13:59:53.751866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:11.329 [2024-12-05 13:59:53.751880] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:11.329 [2024-12-05 13:59:53.752040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.329 [2024-12-05 13:59:53.752052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218cde0 with addr=10.0.0.2, port=4420 00:28:11.329 [2024-12-05 13:59:53.752059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cde0 is same with the state(6) to be set 00:28:11.329 [2024-12-05 13:59:53.752070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218cde0 (9): Bad file descriptor 00:28:11.329 [2024-12-05 13:59:53.752079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:11.329 [2024-12-05 13:59:53.752085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:11.329 [2024-12-05 13:59:53.752096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:11.329 [2024-12-05 13:59:53.752102] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:11.329 [2024-12-05 13:59:53.752107] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:11.329 [2024-12-05 13:59:53.752111] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:11.329 [2024-12-05 13:59:53.761910] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:11.329 [2024-12-05 13:59:53.761921] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:28:11.330 [2024-12-05 13:59:53.761925] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:11.330 [2024-12-05 13:59:53.761929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:11.330 [2024-12-05 13:59:53.761942] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:11.330 [2024-12-05 13:59:53.762192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.330 [2024-12-05 13:59:53.762204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218cde0 with addr=10.0.0.2, port=4420 00:28:11.330 [2024-12-05 13:59:53.762211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cde0 is same with the state(6) to be set 00:28:11.330 [2024-12-05 13:59:53.762222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218cde0 (9): Bad file descriptor 00:28:11.330 [2024-12-05 13:59:53.762232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:11.330 [2024-12-05 13:59:53.762238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:11.330 [2024-12-05 13:59:53.762245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:11.330 [2024-12-05 13:59:53.762250] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:11.330 [2024-12-05 13:59:53.762254] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:11.330 [2024-12-05 13:59:53.762258] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:28:11.330 [2024-12-05 13:59:53.768558] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:28:11.330 [2024-12-05 13:59:53.768575] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.330 
13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:28:11.330 13:59:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:11.330 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:11.588 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.588 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:28:11.588 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:11.588 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:28:11.588 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:28:11.588 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:11.588 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:11.588 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:28:11.588 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:11.588 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:11.588 13:59:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:11.588 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:11.588 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:11.588 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.588 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:11.588 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.588 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:28:11.588 13:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:11.588 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:28:11.588 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:28:11.588 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:11.588 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:11.588 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:11.588 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:11.588 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:11.588 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:11.588 13:59:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:11.588 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:11.588 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.588 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:11.588 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.588 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:28:11.588 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:28:11.588 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:11.588 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:11.588 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:11.588 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.588 13:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.543 [2024-12-05 13:59:55.105840] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:12.543 [2024-12-05 13:59:55.105856] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:12.543 [2024-12-05 13:59:55.105868] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:12.887 [2024-12-05 13:59:55.192122] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:28:13.165 [2024-12-05 13:59:55.492403] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:28:13.165 [2024-12-05 13:59:55.493024] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x21a2410:1 started. 00:28:13.165 [2024-12-05 13:59:55.494733] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:13.165 [2024-12-05 13:59:55.494759] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.165 [2024-12-05 13:59:55.495922] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x21a2410 was disconnected and freed. delete nvme_qpair. 
00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:13.165 request: 00:28:13.165 { 00:28:13.165 "name": "nvme", 00:28:13.165 "trtype": "tcp", 00:28:13.165 "traddr": "10.0.0.2", 00:28:13.165 "adrfam": "ipv4", 00:28:13.165 "trsvcid": "8009", 00:28:13.165 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:13.165 "wait_for_attach": true, 00:28:13.165 "method": "bdev_nvme_start_discovery", 00:28:13.165 "req_id": 1 00:28:13.165 } 00:28:13.165 Got JSON-RPC error response 00:28:13.165 response: 00:28:13.165 { 00:28:13.165 "code": -17, 00:28:13.165 
"message": "File exists" 00:28:13.165 } 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.165 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:13.165 request: 00:28:13.165 { 00:28:13.165 "name": "nvme_second", 00:28:13.165 "trtype": "tcp", 00:28:13.165 "traddr": "10.0.0.2", 00:28:13.165 "adrfam": "ipv4", 00:28:13.165 "trsvcid": "8009", 00:28:13.165 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:13.165 "wait_for_attach": true, 00:28:13.165 "method": "bdev_nvme_start_discovery", 00:28:13.165 "req_id": 1 00:28:13.165 } 00:28:13.165 Got JSON-RPC error response 00:28:13.165 response: 00:28:13.165 { 00:28:13.165 "code": -17, 00:28:13.165 "message": "File exists" 00:28:13.165 } 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 
00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.166 13:59:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:14.534 [2024-12-05 13:59:56.734080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.534 [2024-12-05 13:59:56.734105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f2ff0 with addr=10.0.0.2, port=8010 00:28:14.534 [2024-12-05 13:59:56.734117] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:14.534 [2024-12-05 13:59:56.734123] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:14.534 [2024-12-05 13:59:56.734129] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:15.463 [2024-12-05 13:59:57.736561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.463 [2024-12-05 13:59:57.736584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f2ff0 with addr=10.0.0.2, port=8010 00:28:15.463 [2024-12-05 13:59:57.736595] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:15.463 [2024-12-05 13:59:57.736601] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 
00:28:15.463 [2024-12-05 13:59:57.736607] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:16.393 [2024-12-05 13:59:58.738793] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:28:16.393 request: 00:28:16.393 { 00:28:16.393 "name": "nvme_second", 00:28:16.393 "trtype": "tcp", 00:28:16.393 "traddr": "10.0.0.2", 00:28:16.393 "adrfam": "ipv4", 00:28:16.393 "trsvcid": "8010", 00:28:16.393 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:16.393 "wait_for_attach": false, 00:28:16.393 "attach_timeout_ms": 3000, 00:28:16.393 "method": "bdev_nvme_start_discovery", 00:28:16.393 "req_id": 1 00:28:16.393 } 00:28:16.393 Got JSON-RPC error response 00:28:16.393 response: 00:28:16.393 { 00:28:16.393 "code": -110, 00:28:16.393 "message": "Connection timed out" 00:28:16.393 } 00:28:16.393 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:16.393 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:28:16.393 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:16.393 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:16.393 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:16.393 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:28:16.393 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:16.393 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:16.393 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.393 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # sort 00:28:16.393 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:16.393 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:16.393 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.393 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:28:16.393 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:28:16.393 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 771981 00:28:16.393 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:28:16.393 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:16.393 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:28:16.393 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:16.393 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:28:16.393 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:16.393 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:16.393 rmmod nvme_tcp 00:28:16.393 rmmod nvme_fabrics 00:28:16.393 rmmod nvme_keyring 00:28:16.393 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:16.393 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:28:16.393 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:28:16.393 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 771962 ']' 00:28:16.393 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # 
killprocess 771962 00:28:16.393 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 771962 ']' 00:28:16.393 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 771962 00:28:16.394 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:28:16.394 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:16.394 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 771962 00:28:16.394 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:16.394 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:16.394 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 771962' 00:28:16.394 killing process with pid 771962 00:28:16.394 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 771962 00:28:16.394 13:59:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 771962 00:28:16.652 13:59:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:16.652 13:59:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:16.652 13:59:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:16.652 13:59:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:28:16.652 13:59:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:28:16.652 13:59:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:16.652 13:59:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:28:16.652 
13:59:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:16.652 13:59:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:16.652 13:59:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.652 13:59:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:16.652 13:59:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.558 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:18.558 00:28:18.558 real 0m17.271s 00:28:18.558 user 0m20.718s 00:28:18.558 sys 0m5.772s 00:28:18.558 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:18.558 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.558 ************************************ 00:28:18.558 END TEST nvmf_host_discovery 00:28:18.558 ************************************ 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.816 ************************************ 00:28:18.816 START TEST nvmf_host_multipath_status 00:28:18.816 ************************************ 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:28:18.816 * Looking for test storage... 00:28:18.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:18.816 14:00:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:18.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.816 --rc genhtml_branch_coverage=1 00:28:18.816 --rc genhtml_function_coverage=1 00:28:18.816 --rc genhtml_legend=1 00:28:18.816 --rc geninfo_all_blocks=1 00:28:18.816 --rc geninfo_unexecuted_blocks=1 00:28:18.816 00:28:18.816 ' 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:18.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.816 --rc genhtml_branch_coverage=1 00:28:18.816 --rc genhtml_function_coverage=1 00:28:18.816 --rc genhtml_legend=1 00:28:18.816 --rc geninfo_all_blocks=1 00:28:18.816 --rc geninfo_unexecuted_blocks=1 00:28:18.816 00:28:18.816 ' 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:18.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.816 --rc genhtml_branch_coverage=1 00:28:18.816 --rc genhtml_function_coverage=1 00:28:18.816 --rc genhtml_legend=1 00:28:18.816 --rc geninfo_all_blocks=1 00:28:18.816 --rc geninfo_unexecuted_blocks=1 00:28:18.816 00:28:18.816 ' 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:18.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.816 --rc genhtml_branch_coverage=1 00:28:18.816 --rc genhtml_function_coverage=1 00:28:18.816 --rc genhtml_legend=1 00:28:18.816 --rc geninfo_all_blocks=1 00:28:18.816 --rc geninfo_unexecuted_blocks=1 00:28:18.816 00:28:18.816 ' 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:18.816 14:00:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:18.816 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:18.817 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:18.817 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:18.817 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:18.817 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:18.817 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:18.817 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:18.817 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:18.817 14:00:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:19.076 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:19.076 14:00:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:28:19.076 14:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:25.646 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:25.646 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:25.646 Found net devices under 0000:86:00.0: cvl_0_0 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.646 14:00:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:25.646 Found net devices under 0000:86:00.1: cvl_0_1 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:25.646 14:00:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:25.646 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:25.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:25.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:28:25.647 00:28:25.647 --- 10.0.0.2 ping statistics --- 00:28:25.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.647 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:25.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:25.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:28:25.647 00:28:25.647 --- 10.0.0.1 ping statistics --- 00:28:25.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.647 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=777192 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 777192 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 777192 ']' 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:25.647 [2024-12-05 14:00:07.424184] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:28:25.647 [2024-12-05 14:00:07.424229] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:25.647 [2024-12-05 14:00:07.502099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:25.647 [2024-12-05 14:00:07.543041] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:25.647 [2024-12-05 14:00:07.543079] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:25.647 [2024-12-05 14:00:07.543087] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:25.647 [2024-12-05 14:00:07.543092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:25.647 [2024-12-05 14:00:07.543098] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:25.647 [2024-12-05 14:00:07.544313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.647 [2024-12-05 14:00:07.544316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=777192 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:25.647 [2024-12-05 14:00:07.846063] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:25.647 14:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:28:25.647 Malloc0 00:28:25.647 14:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:28:25.904 14:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:26.162 14:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:26.162 [2024-12-05 14:00:08.675727] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:26.162 14:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:26.420 [2024-12-05 14:00:08.880257] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:26.420 14:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=777632 00:28:26.420 14:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:28:26.420 14:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:26.420 14:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 777632 /var/tmp/bdevperf.sock 00:28:26.420 14:00:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 777632 ']' 00:28:26.420 14:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:26.420 14:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:26.420 14:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:26.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:26.420 14:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:26.420 14:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:26.679 14:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:26.679 14:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:28:26.679 14:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:26.936 14:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:28:27.193 Nvme0n1 00:28:27.193 14:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:28:27.756 Nvme0n1 00:28:27.756 14:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:28:27.756 14:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:28:29.650 14:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:28:29.650 14:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:28:29.907 14:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:30.164 14:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:28:31.121 14:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:28:31.121 14:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:31.121 14:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:31.121 14:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:31.377 14:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:31.377 14:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:31.377 14:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:31.377 14:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:31.633 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:31.633 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:31.633 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:31.633 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:31.888 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:31.888 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:31.888 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:31.888 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:32.144 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:32.144 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:32.144 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:32.144 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:32.144 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:32.144 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:32.145 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:32.145 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:32.452 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:32.452 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:28:32.452 14:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:32.708 14:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:32.964 14:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:28:33.892 14:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:28:33.892 14:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:33.892 14:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:33.892 14:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:34.148 14:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:34.148 14:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:34.148 14:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:34.148 14:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:34.405 14:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:34.405 14:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:34.405 14:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:34.405 14:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:34.663 14:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:34.663 14:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:34.663 14:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:34.663 14:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:34.663 14:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:34.663 14:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:34.663 14:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:34.663 14:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:34.921 14:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:34.921 14:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:34.921 14:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:34.921 14:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:35.178 14:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:35.178 14:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:28:35.178 14:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:35.436 14:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:28:35.692 14:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:28:36.623 14:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:28:36.623 14:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:36.623 14:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:36.623 14:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:36.880 14:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:36.880 14:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:36.880 14:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:36.880 14:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:37.138 14:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:37.138 14:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:37.138 14:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:37.138 14:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:37.138 14:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:37.138 14:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:37.138 14:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:37.138 14:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:37.395 14:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:37.395 14:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:37.395 14:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:37.395 14:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:37.651 14:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:37.651 14:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:37.651 14:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:37.651 14:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:37.908 14:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:37.908 14:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:28:37.908 14:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:38.165 14:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:38.165 14:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:28:39.535 14:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:28:39.535 14:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:39.535 14:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:39.535 14:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:39.535 14:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:39.535 14:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:39.535 14:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:39.535 14:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:39.793 14:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:39.793 14:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:39.793 14:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:28:39.793 14:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:39.793 14:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:39.793 14:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:39.793 14:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:39.793 14:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:40.050 14:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:40.050 14:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:40.050 14:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:40.050 14:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:40.307 14:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:40.307 14:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:40.307 14:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:40.307 14:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:40.563 14:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:40.563 14:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:28:40.563 14:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:40.820 14:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:41.076 14:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:28:42.007 14:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:28:42.007 14:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:42.007 14:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:42.007 14:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:42.264 14:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:42.264 14:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:42.264 14:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:42.264 14:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:42.264 14:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:42.264 14:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:42.264 14:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:42.264 14:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:42.521 14:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:42.521 14:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:42.521 14:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:42.521 14:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:42.777 14:00:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:42.777 14:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:42.777 14:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:42.777 14:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:43.032 14:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:43.032 14:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:43.032 14:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:43.032 14:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:43.033 14:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:43.033 14:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:28:43.033 14:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:43.288 14:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:43.544 14:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:28:44.475 14:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:28:44.475 14:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:44.475 14:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:44.475 14:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:44.733 14:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:44.733 14:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:44.733 14:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:44.733 14:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:44.990 14:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:44.990 14:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:44.990 14:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:44.990 14:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:45.246 14:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:45.246 14:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:45.246 14:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:45.246 14:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:45.503 14:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:45.503 14:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:45.503 14:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:45.503 14:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:45.503 14:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:45.503 14:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:45.503 14:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:45.503 14:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:45.760 14:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:45.760 14:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:28:46.016 14:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:28:46.016 14:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:28:46.273 14:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:46.530 14:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:28:47.461 14:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:28:47.461 14:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:47.461 14:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:28:47.461 14:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:47.719 14:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:47.719 14:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:47.719 14:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:47.719 14:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:47.976 14:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:47.976 14:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:47.976 14:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:47.976 14:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:48.233 14:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:48.233 14:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:48.233 14:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:28:48.233 14:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:48.233 14:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:48.233 14:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:48.233 14:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:48.233 14:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:48.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:48.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:48.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:48.490 14:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:48.746 14:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:48.746 14:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:28:48.746 14:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:49.001 14:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:49.257 14:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:28:50.207 14:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:28:50.207 14:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:50.207 14:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:50.207 14:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:50.465 14:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:50.465 14:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:50.465 14:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:50.465 14:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:50.723 14:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:50.723 14:00:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:50.723 14:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:50.723 14:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:50.979 14:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:50.979 14:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:50.979 14:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:50.979 14:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:50.979 14:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:50.979 14:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:50.979 14:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:50.979 14:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:51.234 14:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:51.234 
14:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:51.234 14:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:51.234 14:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:51.489 14:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:51.489 14:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:28:51.489 14:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:51.744 14:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:28:52.000 14:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:28:52.930 14:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:28:52.930 14:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:52.930 14:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:52.930 14:00:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:53.186 14:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:53.186 14:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:53.186 14:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:53.186 14:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:53.186 14:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:53.186 14:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:53.186 14:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:53.186 14:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:53.443 14:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:53.443 14:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:53.443 14:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:53.443 14:00:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:53.700 14:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:53.700 14:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:53.700 14:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:53.700 14:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:53.956 14:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:53.956 14:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:53.956 14:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:53.956 14:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:54.213 14:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:54.213 14:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:28:54.213 14:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:54.469 14:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:54.469 14:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:28:55.836 14:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:28:55.836 14:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:55.836 14:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:55.836 14:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:55.836 14:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:55.836 14:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:55.836 14:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:55.836 14:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:56.093 14:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:56.093 14:00:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:56.093 14:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:56.093 14:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:56.093 14:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:56.093 14:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:56.093 14:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:56.093 14:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:56.349 14:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:56.349 14:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:56.349 14:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:56.349 14:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:56.606 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:56.606 
14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:56.606 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:56.606 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:56.864 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:56.864 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 777632 00:28:56.864 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 777632 ']' 00:28:56.864 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 777632 00:28:56.864 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:28:56.864 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:56.864 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 777632 00:28:56.864 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:28:56.864 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:28:56.864 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 777632' 00:28:56.864 killing process with pid 777632 00:28:56.864 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 777632 00:28:56.864 14:00:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 777632 00:28:56.864 { 00:28:56.864 "results": [ 00:28:56.864 { 00:28:56.864 "job": "Nvme0n1", 00:28:56.864 "core_mask": "0x4", 00:28:56.864 "workload": "verify", 00:28:56.864 "status": "terminated", 00:28:56.864 "verify_range": { 00:28:56.864 "start": 0, 00:28:56.864 "length": 16384 00:28:56.864 }, 00:28:56.864 "queue_depth": 128, 00:28:56.864 "io_size": 4096, 00:28:56.864 "runtime": 28.981956, 00:28:56.864 "iops": 10737.43952961629, 00:28:56.864 "mibps": 41.94312316256363, 00:28:56.864 "io_failed": 0, 00:28:56.864 "io_timeout": 0, 00:28:56.864 "avg_latency_us": 11901.447543246919, 00:28:56.864 "min_latency_us": 690.4685714285714, 00:28:56.864 "max_latency_us": 3019898.88 00:28:56.864 } 00:28:56.864 ], 00:28:56.864 "core_count": 1 00:28:56.864 } 00:28:57.129 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 777632 00:28:57.129 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:57.129 [2024-12-05 14:00:08.956040] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:28:57.129 [2024-12-05 14:00:08.956101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid777632 ] 00:28:57.129 [2024-12-05 14:00:09.033988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.129 [2024-12-05 14:00:09.075846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:57.129 Running I/O for 90 seconds... 
00:28:57.129 11785.00 IOPS, 46.04 MiB/s [2024-12-05T13:00:39.716Z] 11688.00 IOPS, 45.66 MiB/s [2024-12-05T13:00:39.716Z] 11706.00 IOPS, 45.73 MiB/s [2024-12-05T13:00:39.716Z] 11689.75 IOPS, 45.66 MiB/s [2024-12-05T13:00:39.716Z] 11676.60 IOPS, 45.61 MiB/s [2024-12-05T13:00:39.716Z] 11639.33 IOPS, 45.47 MiB/s [2024-12-05T13:00:39.716Z] 11600.14 IOPS, 45.31 MiB/s [2024-12-05T13:00:39.716Z] 11576.50 IOPS, 45.22 MiB/s [2024-12-05T13:00:39.716Z] 11579.22 IOPS, 45.23 MiB/s [2024-12-05T13:00:39.716Z] 11550.00 IOPS, 45.12 MiB/s [2024-12-05T13:00:39.716Z] 11555.73 IOPS, 45.14 MiB/s [2024-12-05T13:00:39.716Z] 11562.50 IOPS, 45.17 MiB/s [2024-12-05T13:00:39.716Z] [2024-12-05 14:00:23.190877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.190915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.190952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.190961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.190975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.190982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.190995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.191002] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.191015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.191022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.191034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.191041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.191054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.191062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.191075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.191082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.191611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.191630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.191645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:41 nsid:1 lba:10704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.191658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.191671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.191678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.191690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.191697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.191710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.191718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.191731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.191739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.191752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.191759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.191771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.191778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.191791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.191798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.191812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.129 [2024-12-05 14:00:23.191819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.191832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.191839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.191852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.191858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.191871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 
lba:10784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.191878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.191891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.191900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.191915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.191921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.191935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.191941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.191954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.191960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.191973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.191981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.191993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.192000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.192012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.192019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.192031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.192039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.192052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.192058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.192071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.192078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.192090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10872 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:57.129 [2024-12-05 14:00:23.192097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:57.129 [2024-12-05 14:00:23.192110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.192118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.192131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.192138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.192153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.192161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.192174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.192181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.192193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.192201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0032 p:0 m:0 
dnr:0 00:28:57.130 [2024-12-05 14:00:23.192214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.192221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.192234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.192240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.192253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.192261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.192275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.192282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.192294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.192301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.192314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 
[2024-12-05 14:00:23.192322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.192335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.192341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.192354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.192362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.192380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.192388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.192403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.192411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.192425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.192432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 
14:00:23.192445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.192452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.192465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.192472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.192485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.192493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.192506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.192512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.192525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.192535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.192548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.192555] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.192568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.192574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.192587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.192595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.192608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.192615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.192627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.192634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.192648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.192656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.193182] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.193194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.193211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.193218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.193234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.193242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.193258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.193266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.193281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.193291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.193307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.193313] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.193331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.193339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.193357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.193366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.193387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.193394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.193410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.193420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.193436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.193444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:57.130 [2024-12-05 14:00:23.193460] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.130 [2024-12-05 14:00:23.193470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.193486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.193493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.193508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.193516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.193533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.193540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.193557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:11216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.193564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.193582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.193589] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.193605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.193611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.193627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.193634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.193649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.193656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.193671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.193678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.193693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.193700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.193715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.193722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.193738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.193745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.193762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.193769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.193784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.193791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.193806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.193813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.193828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.193835] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.193851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.193858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.193873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.193880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.193895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.193902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.193917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.193924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.193939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.193946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.193962] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.193969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.194044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.194053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.194071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.194079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.194100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.194107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.194124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.194131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.194148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.194155] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.194173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.194180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.194197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.194203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.194220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.194228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.194245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.194251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.194268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.194275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.194292] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.194298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.194315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.194322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.194339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.194346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:57.131 [2024-12-05 14:00:23.194363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.131 [2024-12-05 14:00:23.194374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:23.194390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.132 [2024-12-05 14:00:23.194399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:23.194416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.132 [2024-12-05 14:00:23.194423] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:23.194444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.132 [2024-12-05 14:00:23.194452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:23.194469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.132 [2024-12-05 14:00:23.194475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:23.194492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.132 [2024-12-05 14:00:23.194500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:23.194517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.132 [2024-12-05 14:00:23.194523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:23.194540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.132 [2024-12-05 14:00:23.194547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:23.194566] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.132 [2024-12-05 14:00:23.194572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:23.194589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.132 [2024-12-05 14:00:23.194596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:23.194614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.132 [2024-12-05 14:00:23.194620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:23.194637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.132 [2024-12-05 14:00:23.194643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:23.194661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.132 [2024-12-05 14:00:23.194668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:23.194685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.132 [2024-12-05 14:00:23.194691] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:23.194709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.132 [2024-12-05 14:00:23.194717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:23.194734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.132 [2024-12-05 14:00:23.194740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:23.194757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.132 [2024-12-05 14:00:23.194764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:23.194782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.132 [2024-12-05 14:00:23.194788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:23.194805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.132 [2024-12-05 14:00:23.194811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:23.194831] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.132 [2024-12-05 14:00:23.194838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:23.194855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.132 [2024-12-05 14:00:23.194861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:57.132 11444.38 IOPS, 44.70 MiB/s [2024-12-05T13:00:39.719Z] 10626.93 IOPS, 41.51 MiB/s [2024-12-05T13:00:39.719Z] 9918.47 IOPS, 38.74 MiB/s [2024-12-05T13:00:39.719Z] 9395.88 IOPS, 36.70 MiB/s [2024-12-05T13:00:39.719Z] 9511.06 IOPS, 37.15 MiB/s [2024-12-05T13:00:39.719Z] 9615.67 IOPS, 37.56 MiB/s [2024-12-05T13:00:39.719Z] 9789.84 IOPS, 38.24 MiB/s [2024-12-05T13:00:39.719Z] 9982.70 IOPS, 38.99 MiB/s [2024-12-05T13:00:39.719Z] 10162.38 IOPS, 39.70 MiB/s [2024-12-05T13:00:39.719Z] 10237.82 IOPS, 39.99 MiB/s [2024-12-05T13:00:39.719Z] 10295.65 IOPS, 40.22 MiB/s [2024-12-05T13:00:39.719Z] 10341.92 IOPS, 40.40 MiB/s [2024-12-05T13:00:39.719Z] 10472.76 IOPS, 40.91 MiB/s [2024-12-05T13:00:39.719Z] 10593.73 IOPS, 41.38 MiB/s [2024-12-05T13:00:39.719Z] [2024-12-05 14:00:37.024994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:49288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.132 [2024-12-05 14:00:37.025034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:37.025068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.132 [2024-12-05 14:00:37.025076] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:37.027187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.132 [2024-12-05 14:00:37.027210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:37.027227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:49336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.132 [2024-12-05 14:00:37.027242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:37.027255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:49352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.132 [2024-12-05 14:00:37.027262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:37.027274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:49368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.132 [2024-12-05 14:00:37.027282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:37.027294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:49384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.132 [2024-12-05 14:00:37.027301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:37.027314] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:49400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.132 [2024-12-05 14:00:37.027322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:37.027335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:49416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.132 [2024-12-05 14:00:37.027342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:37.027354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:49432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.132 [2024-12-05 14:00:37.027362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:37.027379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:49448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.132 [2024-12-05 14:00:37.027388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:37.027401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.132 [2024-12-05 14:00:37.027408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:37.027420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.132 [2024-12-05 14:00:37.027428] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:57.132 [2024-12-05 14:00:37.027441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:49496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.132 [2024-12-05 14:00:37.027448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:57.133 [2024-12-05 14:00:37.027460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:49512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.133 [2024-12-05 14:00:37.027467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:57.133 [2024-12-05 14:00:37.027480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:49528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.133 [2024-12-05 14:00:37.027487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:57.133 [2024-12-05 14:00:37.027502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.133 [2024-12-05 14:00:37.027509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:57.133 [2024-12-05 14:00:37.027521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:49560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.133 [2024-12-05 14:00:37.027528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:57.133 [2024-12-05 14:00:37.027541] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:49576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.133 [2024-12-05 14:00:37.027549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:57.133 [2024-12-05 14:00:37.028241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:49592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.133 [2024-12-05 14:00:37.028257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:57.133 [2024-12-05 14:00:37.028272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:49608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.133 [2024-12-05 14:00:37.028280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.133 [2024-12-05 14:00:37.028293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:49624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.133 [2024-12-05 14:00:37.028302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.133 [2024-12-05 14:00:37.028314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:49640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.133 [2024-12-05 14:00:37.028322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:57.133 [2024-12-05 14:00:37.028334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.133 [2024-12-05 14:00:37.028342] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:57.133 [2024-12-05 14:00:37.028356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:49672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.133 [2024-12-05 14:00:37.028363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:57.133 [2024-12-05 14:00:37.028382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:49688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.133 [2024-12-05 14:00:37.028389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:57.133 [2024-12-05 14:00:37.028402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:49704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.133 [2024-12-05 14:00:37.028409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:57.133 [2024-12-05 14:00:37.028422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:49720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.133 [2024-12-05 14:00:37.028429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:57.133 [2024-12-05 14:00:37.028444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.133 [2024-12-05 14:00:37.028452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:57.133 [2024-12-05 14:00:37.028465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:49752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.133 [2024-12-05 14:00:37.028473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:57.133 [2024-12-05 14:00:37.028486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:49768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.133 [2024-12-05 14:00:37.028493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:57.133 [2024-12-05 14:00:37.028505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:49784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.133 [2024-12-05 14:00:37.028512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:57.133 [2024-12-05 14:00:37.028526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.133 [2024-12-05 14:00:37.028534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:57.133 [2024-12-05 14:00:37.028547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:49816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.133 [2024-12-05 14:00:37.028555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:57.133 [2024-12-05 14:00:37.028567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:49832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.133 [2024-12-05 14:00:37.028574] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:57.133 [2024-12-05 14:00:37.028587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:49848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.133 [2024-12-05 14:00:37.028595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:57.133 10692.74 IOPS, 41.77 MiB/s [2024-12-05T13:00:39.720Z] 10717.07 IOPS, 41.86 MiB/s [2024-12-05T13:00:39.720Z] Received shutdown signal, test time was about 28.982593 seconds 00:28:57.133 00:28:57.133 Latency(us) 00:28:57.133 [2024-12-05T13:00:39.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.133 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:57.133 Verification LBA range: start 0x0 length 0x4000 00:28:57.133 Nvme0n1 : 28.98 10737.44 41.94 0.00 0.00 11901.45 690.47 3019898.88 00:28:57.133 [2024-12-05T13:00:39.720Z] =================================================================================================================== 00:28:57.133 [2024-12-05T13:00:39.720Z] Total : 10737.44 41.94 0.00 0.00 11901.45 690.47 3019898.88 00:28:57.133 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:57.530 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:28:57.530 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:57.530 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:28:57.530 14:00:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:57.531 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:28:57.531 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:57.531 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:28:57.531 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:57.531 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:57.531 rmmod nvme_tcp 00:28:57.531 rmmod nvme_fabrics 00:28:57.531 rmmod nvme_keyring 00:28:57.531 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:57.531 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:28:57.531 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:28:57.531 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 777192 ']' 00:28:57.531 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 777192 00:28:57.531 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 777192 ']' 00:28:57.531 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 777192 00:28:57.531 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:28:57.531 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:57.531 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 777192 00:28:57.531 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:57.531 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:57.531 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 777192' 00:28:57.531 killing process with pid 777192 00:28:57.531 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 777192 00:28:57.531 14:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 777192 00:28:57.531 14:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:57.531 14:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:57.531 14:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:57.531 14:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:28:57.531 14:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:28:57.531 14:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:57.531 14:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:28:57.531 14:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:57.531 14:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:57.531 14:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.531 14:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:57.531 14:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:00.069 00:29:00.069 real 0m40.880s 00:29:00.069 user 1m50.884s 00:29:00.069 sys 0m11.688s 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:00.069 ************************************ 00:29:00.069 END TEST nvmf_host_multipath_status 00:29:00.069 ************************************ 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.069 ************************************ 00:29:00.069 START TEST nvmf_discovery_remove_ifc 00:29:00.069 ************************************ 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:00.069 * Looking for test storage... 
00:29:00.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:29:00.069 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:29:00.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.070 --rc genhtml_branch_coverage=1 00:29:00.070 --rc genhtml_function_coverage=1 00:29:00.070 --rc genhtml_legend=1 00:29:00.070 --rc geninfo_all_blocks=1 00:29:00.070 --rc geninfo_unexecuted_blocks=1 00:29:00.070 00:29:00.070 ' 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:00.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.070 --rc genhtml_branch_coverage=1 00:29:00.070 --rc genhtml_function_coverage=1 00:29:00.070 --rc genhtml_legend=1 00:29:00.070 --rc geninfo_all_blocks=1 00:29:00.070 --rc geninfo_unexecuted_blocks=1 00:29:00.070 00:29:00.070 ' 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:00.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.070 --rc genhtml_branch_coverage=1 00:29:00.070 --rc genhtml_function_coverage=1 00:29:00.070 --rc genhtml_legend=1 00:29:00.070 --rc geninfo_all_blocks=1 00:29:00.070 --rc geninfo_unexecuted_blocks=1 00:29:00.070 00:29:00.070 ' 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:00.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.070 --rc genhtml_branch_coverage=1 00:29:00.070 --rc genhtml_function_coverage=1 00:29:00.070 --rc genhtml_legend=1 00:29:00.070 --rc geninfo_all_blocks=1 00:29:00.070 --rc geninfo_unexecuted_blocks=1 00:29:00.070 00:29:00.070 ' 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:00.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:00.070 
14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:29:00.070 14:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:29:06.657 14:00:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:06.657 14:00:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:06.657 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.657 14:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:06.657 14:00:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:06.657 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:06.657 Found net devices under 0000:86:00.0: cvl_0_0 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:06.657 Found net devices under 0000:86:00.1: cvl_0_1 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:06.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:06.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:29:06.657 00:29:06.657 --- 10.0.0.2 ping statistics --- 00:29:06.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.657 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:06.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:06.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:29:06.657 00:29:06.657 --- 10.0.0.1 ping statistics --- 00:29:06.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.657 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:29:06.657 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=786493 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 786493 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 786493 ']' 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:06.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:06.658 [2024-12-05 14:00:48.339112] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:29:06.658 [2024-12-05 14:00:48.339160] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:06.658 [2024-12-05 14:00:48.418182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.658 [2024-12-05 14:00:48.458080] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:06.658 [2024-12-05 14:00:48.458114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:06.658 [2024-12-05 14:00:48.458121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:06.658 [2024-12-05 14:00:48.458127] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:06.658 [2024-12-05 14:00:48.458132] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:06.658 [2024-12-05 14:00:48.458711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:06.658 [2024-12-05 14:00:48.602171] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:06.658 [2024-12-05 14:00:48.610331] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:06.658 null0 00:29:06.658 [2024-12-05 14:00:48.642332] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=786612 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 786612 /tmp/host.sock 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 786612 ']' 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:06.658 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:06.658 [2024-12-05 14:00:48.710932] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:29:06.658 [2024-12-05 14:00:48.710973] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786612 ] 00:29:06.658 [2024-12-05 14:00:48.782555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.658 [2024-12-05 14:00:48.822817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.658 14:00:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.658 14:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:07.591 [2024-12-05 14:00:50.019468] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:07.591 [2024-12-05 14:00:50.019488] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:07.591 [2024-12-05 14:00:50.019507] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:07.591 [2024-12-05 14:00:50.105770] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:07.848 [2024-12-05 14:00:50.281829] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:29:07.848 [2024-12-05 14:00:50.282623] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xa54850:1 started. 
00:29:07.848 [2024-12-05 14:00:50.283966] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:07.848 [2024-12-05 14:00:50.284008] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:07.848 [2024-12-05 14:00:50.284028] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:07.848 [2024-12-05 14:00:50.284040] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:07.848 [2024-12-05 14:00:50.284060] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:07.848 14:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.848 14:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:29:07.848 [2024-12-05 14:00:50.287772] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xa54850 was disconnected and freed. delete nvme_qpair. 
00:29:07.848 14:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:07.848 14:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:07.848 14:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:07.848 14:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.848 14:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:07.848 14:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:07.849 14:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:07.849 14:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.849 14:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:29:07.849 14:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:29:07.849 14:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:29:08.151 14:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:29:08.151 14:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:08.151 14:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:08.151 14:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:08.151 14:00:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.152 14:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:08.152 14:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:08.152 14:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:08.152 14:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.152 14:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:08.152 14:00:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:09.083 14:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:09.083 14:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:09.083 14:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:09.083 14:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.083 14:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:09.083 14:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:09.083 14:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:09.083 14:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.083 14:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:09.083 14:00:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:29:10.015 14:00:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:10.015 14:00:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:10.015 14:00:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:10.015 14:00:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.015 14:00:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:10.015 14:00:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:10.015 14:00:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:10.015 14:00:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.015 14:00:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:10.015 14:00:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:11.390 14:00:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:11.390 14:00:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:11.390 14:00:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:11.390 14:00:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.390 14:00:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:11.390 14:00:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:11.390 14:00:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:11.390 14:00:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.390 14:00:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:11.390 14:00:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:12.326 14:00:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:12.326 14:00:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:12.326 14:00:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:12.326 14:00:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.326 14:00:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:12.326 14:00:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:12.326 14:00:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:12.326 14:00:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.326 14:00:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:12.326 14:00:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:13.260 14:00:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:13.260 14:00:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:13.260 14:00:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:13.260 14:00:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.260 14:00:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:13.260 14:00:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:13.261 14:00:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:13.261 14:00:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.261 [2024-12-05 14:00:55.725473] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:29:13.261 [2024-12-05 14:00:55.725507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:13.261 [2024-12-05 14:00:55.725518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.261 [2024-12-05 14:00:55.725526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:13.261 [2024-12-05 14:00:55.725533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.261 [2024-12-05 14:00:55.725541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:13.261 [2024-12-05 14:00:55.725547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.261 [2024-12-05 14:00:55.725554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:13.261 [2024-12-05 14:00:55.725561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.261 [2024-12-05 14:00:55.725567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:13.261 [2024-12-05 14:00:55.725574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.261 [2024-12-05 14:00:55.725580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa31070 is same with the state(6) to be set 00:29:13.261 [2024-12-05 14:00:55.735495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa31070 (9): Bad file descriptor 00:29:13.261 14:00:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:13.261 14:00:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:13.261 [2024-12-05 14:00:55.745532] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:13.261 [2024-12-05 14:00:55.745544] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:13.261 [2024-12-05 14:00:55.745551] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:13.261 [2024-12-05 14:00:55.745555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:13.261 [2024-12-05 14:00:55.745576] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:29:14.196 14:00:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:14.196 14:00:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:14.196 14:00:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:14.196 14:00:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.196 14:00:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:14.196 14:00:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:14.196 14:00:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:14.196 [2024-12-05 14:00:56.758393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:29:14.196 [2024-12-05 14:00:56.758464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa31070 with addr=10.0.0.2, port=4420 00:29:14.196 [2024-12-05 14:00:56.758495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa31070 is same with the state(6) to be set 00:29:14.196 [2024-12-05 14:00:56.758552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa31070 (9): Bad file descriptor 00:29:14.196 [2024-12-05 14:00:56.759504] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:29:14.196 [2024-12-05 14:00:56.759567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:14.196 [2024-12-05 14:00:56.759593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:14.196 [2024-12-05 14:00:56.759617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:14.196 [2024-12-05 14:00:56.759639] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:14.196 [2024-12-05 14:00:56.759655] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:14.196 [2024-12-05 14:00:56.759669] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:14.196 [2024-12-05 14:00:56.759692] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:14.196 [2024-12-05 14:00:56.759708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:14.196 14:00:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.455 14:00:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:14.455 14:00:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:15.391 [2024-12-05 14:00:57.762223] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:15.391 [2024-12-05 14:00:57.762242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:29:15.391 [2024-12-05 14:00:57.762253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:15.391 [2024-12-05 14:00:57.762260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:15.391 [2024-12-05 14:00:57.762267] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:29:15.391 [2024-12-05 14:00:57.762273] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:15.391 [2024-12-05 14:00:57.762278] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:15.391 [2024-12-05 14:00:57.762282] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:15.391 [2024-12-05 14:00:57.762300] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:29:15.391 [2024-12-05 14:00:57.762318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.391 [2024-12-05 14:00:57.762326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.391 [2024-12-05 14:00:57.762335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.391 [2024-12-05 14:00:57.762342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.391 [2024-12-05 14:00:57.762349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:15.391 [2024-12-05 14:00:57.762356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.391 [2024-12-05 14:00:57.762371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.391 [2024-12-05 14:00:57.762378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.391 [2024-12-05 14:00:57.762386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.391 [2024-12-05 14:00:57.762392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.391 [2024-12-05 14:00:57.762398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:29:15.391 [2024-12-05 14:00:57.762678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa20760 (9): Bad file descriptor 00:29:15.391 [2024-12-05 14:00:57.763687] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:29:15.391 [2024-12-05 14:00:57.763698] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:29:15.391 14:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:15.391 14:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:15.391 14:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:15.391 14:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:15.391 14:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:15.391 14:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:15.391 14:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:15.391 14:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.391 14:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:29:15.391 14:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:15.391 14:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:15.391 14:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:29:15.391 14:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:15.391 14:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:15.391 14:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:15.391 14:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:15.391 14:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.391 14:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:15.391 14:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:15.391 14:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:29:15.650 14:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:15.650 14:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:16.586 14:00:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:16.586 14:00:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:16.586 14:00:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:16.586 14:00:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.586 14:00:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:16.586 14:00:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:16.586 14:00:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:16.586 14:00:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.586 14:00:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:16.586 14:00:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:17.521 [2024-12-05 14:00:59.813908] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:17.522 [2024-12-05 14:00:59.813926] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:17.522 [2024-12-05 14:00:59.813937] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:17.522 [2024-12-05 14:00:59.900195] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:29:17.522 [2024-12-05 14:00:59.994852] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:29:17.522 [2024-12-05 14:00:59.995479] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xa3b860:1 started. 00:29:17.522 [2024-12-05 14:00:59.996497] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:17.522 [2024-12-05 14:00:59.996528] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:17.522 [2024-12-05 14:00:59.996545] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:17.522 [2024-12-05 14:00:59.996557] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:29:17.522 [2024-12-05 14:00:59.996565] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:17.522 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:17.522 [2024-12-05 14:01:00.042953] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xa3b860 was disconnected and freed. delete nvme_qpair. 
00:29:17.522 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:17.522 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:17.522 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:17.522 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.522 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:17.522 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:17.522 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.522 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:29:17.522 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:29:17.522 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 786612 00:29:17.522 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 786612 ']' 00:29:17.522 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 786612 00:29:17.522 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:29:17.522 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:17.522 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 786612 00:29:17.782 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:29:17.782 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:17.782 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 786612' 00:29:17.782 killing process with pid 786612 00:29:17.782 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 786612 00:29:17.782 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 786612 00:29:17.782 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:29:17.782 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:17.782 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:29:17.782 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:17.782 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:29:17.782 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:17.782 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:17.782 rmmod nvme_tcp 00:29:17.782 rmmod nvme_fabrics 00:29:17.782 rmmod nvme_keyring 00:29:17.782 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:17.782 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:29:17.782 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:29:17.782 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 786493 ']' 00:29:17.782 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 786493 00:29:17.782 14:01:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 786493 ']' 00:29:17.782 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 786493 00:29:17.782 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:29:17.782 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:17.782 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 786493 00:29:18.042 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:18.042 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:18.042 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 786493' 00:29:18.042 killing process with pid 786493 00:29:18.042 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 786493 00:29:18.042 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 786493 00:29:18.042 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:18.042 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:18.042 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:18.042 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:29:18.042 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:29:18.042 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:18.042 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@791 -- # iptables-restore 00:29:18.042 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:18.042 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:18.042 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.042 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.042 14:01:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.579 14:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:20.579 00:29:20.579 real 0m20.476s 00:29:20.579 user 0m24.789s 00:29:20.579 sys 0m5.807s 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:20.580 ************************************ 00:29:20.580 END TEST nvmf_discovery_remove_ifc 00:29:20.580 ************************************ 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.580 ************************************ 00:29:20.580 START TEST nvmf_identify_kernel_target 00:29:20.580 ************************************ 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:29:20.580 * Looking for test storage... 00:29:20.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:29:20.580 14:01:02 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:20.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.580 --rc genhtml_branch_coverage=1 00:29:20.580 --rc genhtml_function_coverage=1 00:29:20.580 --rc genhtml_legend=1 00:29:20.580 --rc geninfo_all_blocks=1 00:29:20.580 --rc geninfo_unexecuted_blocks=1 00:29:20.580 00:29:20.580 ' 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:20.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.580 --rc genhtml_branch_coverage=1 00:29:20.580 --rc genhtml_function_coverage=1 00:29:20.580 --rc genhtml_legend=1 00:29:20.580 --rc geninfo_all_blocks=1 00:29:20.580 --rc geninfo_unexecuted_blocks=1 00:29:20.580 00:29:20.580 ' 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:20.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.580 --rc genhtml_branch_coverage=1 00:29:20.580 --rc genhtml_function_coverage=1 00:29:20.580 --rc genhtml_legend=1 00:29:20.580 --rc geninfo_all_blocks=1 00:29:20.580 --rc geninfo_unexecuted_blocks=1 00:29:20.580 00:29:20.580 ' 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:20.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.580 --rc genhtml_branch_coverage=1 00:29:20.580 --rc genhtml_function_coverage=1 00:29:20.580 --rc genhtml_legend=1 00:29:20.580 --rc geninfo_all_blocks=1 00:29:20.580 --rc geninfo_unexecuted_blocks=1 00:29:20.580 00:29:20.580 ' 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:20.580 14:01:02 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:20.580 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:20.581 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.581 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.581 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.581 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:29:20.581 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.581 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:29:20.581 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:20.581 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:20.581 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:20.581 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:20.581 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:20.581 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:20.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:20.581 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:20.581 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:20.581 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:20.581 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:29:20.581 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:20.581 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:20.581 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:20.581 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:20.581 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:20.581 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.581 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:20.581 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.581 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:20.581 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:20.581 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:29:20.581 14:01:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:29:27.148 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:27.148 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:29:27.148 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:27.148 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:27.148 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:27.148 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:27.148 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:27.148 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:29:27.148 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:27.148 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:29:27.148 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:29:27.148 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:29:27.148 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:29:27.148 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:29:27.148 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:29:27.148 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:27.149 14:01:08 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:27.149 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:27.149 14:01:08 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:27.149 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:27.149 14:01:08 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:27.149 Found net devices under 0000:86:00.0: cvl_0_0 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:27.149 Found net devices under 0000:86:00.1: cvl_0_1 
00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:27.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:27.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:29:27.149 00:29:27.149 --- 10.0.0.2 ping statistics --- 00:29:27.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.149 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:27.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:27.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:29:27.149 00:29:27.149 --- 10.0.0.1 ping statistics --- 00:29:27.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.149 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:27.149 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:29:27.149 
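The trace above shows `ipts` (common.sh@287) expanding to a full `iptables` call (common.sh@790) that appends `-m comment --comment 'SPDK_NVMF:<args>'`: tagging each rule with its own argument string lets the cleanup path find and delete exactly the rules the test inserted. A hypothetical re-creation of such a wrapper, with a `DRY_RUN` switch so it can be exercised without root:

```shell
# Sketch of an ipts-style wrapper: forwards its arguments to iptables
# and tags the rule with a comment recording those same arguments, so
# teardown code can locate SPDK-created rules later. With DRY_RUN=1 it
# prints the command instead of executing it (the real helper in the
# log runs iptables directly and needs root).
ipts() {
    local cmd=(iptables "$@" -m comment --comment "SPDK_NVMF:$*")
    if [ "${DRY_RUN:-0}" -eq 1 ]; then
        echo "${cmd[*]}"
    else
        "${cmd[@]}"
    fi
}

DRY_RUN=1 ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

Deleting the tagged rules is then a matter of listing rules whose comment starts with `SPDK_NVMF:` and replaying them with `-D` in place of `-I`.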
14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:29:27.150 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:29:27.150 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:27.150 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:27.150 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.150 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.150 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:27.150 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.150 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:27.150 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:27.150 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:27.150 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:29:27.150 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:27.150 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:27.150 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:29:27.150 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:27.150 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:27.150 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:27.150 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:29:27.150 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:29:27.150 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:29:27.150 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:27.150 14:01:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:29.057 Waiting for block devices as requested 00:29:29.057 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:29.316 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:29.316 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:29.575 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:29.575 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:29.575 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:29.575 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:29.834 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:29.834 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:29.834 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:30.093 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:30.093 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:30.093 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:30.093 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:30.352 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:29:30.352 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:30.352 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:30.611 14:01:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:29:30.611 14:01:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:30.612 14:01:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:29:30.612 14:01:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:29:30.612 14:01:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:30.612 14:01:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:30.612 14:01:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:29:30.612 14:01:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:29:30.612 14:01:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:30.612 No valid GPT data, bailing 00:29:30.612 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:30.612 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:29:30.612 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:29:30.612 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:29:30.612 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:29:30.612 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:30.612 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:30.612 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:30.612 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:30.612 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:29:30.612 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:29:30.612 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:29:30.612 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:29:30.612 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:29:30.612 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:29:30.612 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:29:30.612 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:30.612 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:29:30.612 00:29:30.612 Discovery Log Number of Records 2, Generation counter 2 00:29:30.612 =====Discovery Log Entry 0====== 00:29:30.612 trtype: tcp 00:29:30.612 adrfam: ipv4 00:29:30.612 subtype: current discovery subsystem 
00:29:30.612 treq: not specified, sq flow control disable supported 00:29:30.612 portid: 1 00:29:30.612 trsvcid: 4420 00:29:30.612 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:30.612 traddr: 10.0.0.1 00:29:30.612 eflags: none 00:29:30.612 sectype: none 00:29:30.612 =====Discovery Log Entry 1====== 00:29:30.612 trtype: tcp 00:29:30.612 adrfam: ipv4 00:29:30.612 subtype: nvme subsystem 00:29:30.612 treq: not specified, sq flow control disable supported 00:29:30.612 portid: 1 00:29:30.612 trsvcid: 4420 00:29:30.612 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:30.612 traddr: 10.0.0.1 00:29:30.612 eflags: none 00:29:30.612 sectype: none 00:29:30.612 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:29:30.612 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:29:30.879 ===================================================== 00:29:30.879 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:30.879 ===================================================== 00:29:30.879 Controller Capabilities/Features 00:29:30.879 ================================ 00:29:30.879 Vendor ID: 0000 00:29:30.879 Subsystem Vendor ID: 0000 00:29:30.879 Serial Number: ebd7d7bcc0f5d4b8428a 00:29:30.879 Model Number: Linux 00:29:30.879 Firmware Version: 6.8.9-20 00:29:30.879 Recommended Arb Burst: 0 00:29:30.879 IEEE OUI Identifier: 00 00 00 00:29:30.879 Multi-path I/O 00:29:30.879 May have multiple subsystem ports: No 00:29:30.879 May have multiple controllers: No 00:29:30.879 Associated with SR-IOV VF: No 00:29:30.879 Max Data Transfer Size: Unlimited 00:29:30.879 Max Number of Namespaces: 0 00:29:30.879 Max Number of I/O Queues: 1024 00:29:30.879 NVMe Specification Version (VS): 1.3 00:29:30.879 NVMe Specification Version (Identify): 1.3 00:29:30.879 Maximum Queue Entries: 1024 
00:29:30.879 Contiguous Queues Required: No 00:29:30.879 Arbitration Mechanisms Supported 00:29:30.879 Weighted Round Robin: Not Supported 00:29:30.879 Vendor Specific: Not Supported 00:29:30.879 Reset Timeout: 7500 ms 00:29:30.879 Doorbell Stride: 4 bytes 00:29:30.879 NVM Subsystem Reset: Not Supported 00:29:30.879 Command Sets Supported 00:29:30.879 NVM Command Set: Supported 00:29:30.879 Boot Partition: Not Supported 00:29:30.879 Memory Page Size Minimum: 4096 bytes 00:29:30.879 Memory Page Size Maximum: 4096 bytes 00:29:30.879 Persistent Memory Region: Not Supported 00:29:30.879 Optional Asynchronous Events Supported 00:29:30.879 Namespace Attribute Notices: Not Supported 00:29:30.879 Firmware Activation Notices: Not Supported 00:29:30.879 ANA Change Notices: Not Supported 00:29:30.879 PLE Aggregate Log Change Notices: Not Supported 00:29:30.879 LBA Status Info Alert Notices: Not Supported 00:29:30.879 EGE Aggregate Log Change Notices: Not Supported 00:29:30.879 Normal NVM Subsystem Shutdown event: Not Supported 00:29:30.879 Zone Descriptor Change Notices: Not Supported 00:29:30.879 Discovery Log Change Notices: Supported 00:29:30.879 Controller Attributes 00:29:30.879 128-bit Host Identifier: Not Supported 00:29:30.879 Non-Operational Permissive Mode: Not Supported 00:29:30.879 NVM Sets: Not Supported 00:29:30.879 Read Recovery Levels: Not Supported 00:29:30.879 Endurance Groups: Not Supported 00:29:30.879 Predictable Latency Mode: Not Supported 00:29:30.879 Traffic Based Keep ALive: Not Supported 00:29:30.879 Namespace Granularity: Not Supported 00:29:30.879 SQ Associations: Not Supported 00:29:30.879 UUID List: Not Supported 00:29:30.879 Multi-Domain Subsystem: Not Supported 00:29:30.879 Fixed Capacity Management: Not Supported 00:29:30.880 Variable Capacity Management: Not Supported 00:29:30.880 Delete Endurance Group: Not Supported 00:29:30.880 Delete NVM Set: Not Supported 00:29:30.880 Extended LBA Formats Supported: Not Supported 00:29:30.880 Flexible 
Data Placement Supported: Not Supported 00:29:30.880 00:29:30.880 Controller Memory Buffer Support 00:29:30.880 ================================ 00:29:30.880 Supported: No 00:29:30.880 00:29:30.880 Persistent Memory Region Support 00:29:30.880 ================================ 00:29:30.880 Supported: No 00:29:30.880 00:29:30.880 Admin Command Set Attributes 00:29:30.880 ============================ 00:29:30.880 Security Send/Receive: Not Supported 00:29:30.880 Format NVM: Not Supported 00:29:30.880 Firmware Activate/Download: Not Supported 00:29:30.880 Namespace Management: Not Supported 00:29:30.880 Device Self-Test: Not Supported 00:29:30.880 Directives: Not Supported 00:29:30.880 NVMe-MI: Not Supported 00:29:30.880 Virtualization Management: Not Supported 00:29:30.880 Doorbell Buffer Config: Not Supported 00:29:30.880 Get LBA Status Capability: Not Supported 00:29:30.880 Command & Feature Lockdown Capability: Not Supported 00:29:30.880 Abort Command Limit: 1 00:29:30.880 Async Event Request Limit: 1 00:29:30.880 Number of Firmware Slots: N/A 00:29:30.880 Firmware Slot 1 Read-Only: N/A 00:29:30.880 Firmware Activation Without Reset: N/A 00:29:30.880 Multiple Update Detection Support: N/A 00:29:30.880 Firmware Update Granularity: No Information Provided 00:29:30.880 Per-Namespace SMART Log: No 00:29:30.880 Asymmetric Namespace Access Log Page: Not Supported 00:29:30.880 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:30.880 Command Effects Log Page: Not Supported 00:29:30.880 Get Log Page Extended Data: Supported 00:29:30.880 Telemetry Log Pages: Not Supported 00:29:30.880 Persistent Event Log Pages: Not Supported 00:29:30.880 Supported Log Pages Log Page: May Support 00:29:30.880 Commands Supported & Effects Log Page: Not Supported 00:29:30.880 Feature Identifiers & Effects Log Page:May Support 00:29:30.880 NVMe-MI Commands & Effects Log Page: May Support 00:29:30.880 Data Area 4 for Telemetry Log: Not Supported 00:29:30.880 Error Log Page Entries 
Supported: 1 00:29:30.880 Keep Alive: Not Supported 00:29:30.880 00:29:30.880 NVM Command Set Attributes 00:29:30.880 ========================== 00:29:30.880 Submission Queue Entry Size 00:29:30.880 Max: 1 00:29:30.880 Min: 1 00:29:30.880 Completion Queue Entry Size 00:29:30.880 Max: 1 00:29:30.880 Min: 1 00:29:30.880 Number of Namespaces: 0 00:29:30.880 Compare Command: Not Supported 00:29:30.880 Write Uncorrectable Command: Not Supported 00:29:30.880 Dataset Management Command: Not Supported 00:29:30.880 Write Zeroes Command: Not Supported 00:29:30.880 Set Features Save Field: Not Supported 00:29:30.880 Reservations: Not Supported 00:29:30.880 Timestamp: Not Supported 00:29:30.880 Copy: Not Supported 00:29:30.880 Volatile Write Cache: Not Present 00:29:30.880 Atomic Write Unit (Normal): 1 00:29:30.880 Atomic Write Unit (PFail): 1 00:29:30.880 Atomic Compare & Write Unit: 1 00:29:30.880 Fused Compare & Write: Not Supported 00:29:30.880 Scatter-Gather List 00:29:30.880 SGL Command Set: Supported 00:29:30.880 SGL Keyed: Not Supported 00:29:30.880 SGL Bit Bucket Descriptor: Not Supported 00:29:30.880 SGL Metadata Pointer: Not Supported 00:29:30.880 Oversized SGL: Not Supported 00:29:30.880 SGL Metadata Address: Not Supported 00:29:30.880 SGL Offset: Supported 00:29:30.880 Transport SGL Data Block: Not Supported 00:29:30.880 Replay Protected Memory Block: Not Supported 00:29:30.880 00:29:30.880 Firmware Slot Information 00:29:30.880 ========================= 00:29:30.880 Active slot: 0 00:29:30.880 00:29:30.880 00:29:30.880 Error Log 00:29:30.880 ========= 00:29:30.880 00:29:30.880 Active Namespaces 00:29:30.880 ================= 00:29:30.880 Discovery Log Page 00:29:30.880 ================== 00:29:30.880 Generation Counter: 2 00:29:30.880 Number of Records: 2 00:29:30.880 Record Format: 0 00:29:30.880 00:29:30.880 Discovery Log Entry 0 00:29:30.880 ---------------------- 00:29:30.880 Transport Type: 3 (TCP) 00:29:30.880 Address Family: 1 (IPv4) 00:29:30.880 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:29:30.880 Entry Flags: 00:29:30.880 Duplicate Returned Information: 0 00:29:30.880 Explicit Persistent Connection Support for Discovery: 0 00:29:30.880 Transport Requirements: 00:29:30.880 Secure Channel: Not Specified 00:29:30.880 Port ID: 1 (0x0001) 00:29:30.880 Controller ID: 65535 (0xffff) 00:29:30.880 Admin Max SQ Size: 32 00:29:30.880 Transport Service Identifier: 4420 00:29:30.880 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:30.880 Transport Address: 10.0.0.1 00:29:30.880 Discovery Log Entry 1 00:29:30.880 ---------------------- 00:29:30.880 Transport Type: 3 (TCP) 00:29:30.880 Address Family: 1 (IPv4) 00:29:30.880 Subsystem Type: 2 (NVM Subsystem) 00:29:30.880 Entry Flags: 00:29:30.880 Duplicate Returned Information: 0 00:29:30.880 Explicit Persistent Connection Support for Discovery: 0 00:29:30.880 Transport Requirements: 00:29:30.880 Secure Channel: Not Specified 00:29:30.880 Port ID: 1 (0x0001) 00:29:30.880 Controller ID: 65535 (0xffff) 00:29:30.880 Admin Max SQ Size: 32 00:29:30.880 Transport Service Identifier: 4420 00:29:30.881 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:29:30.881 Transport Address: 10.0.0.1 00:29:30.881 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:30.881 get_feature(0x01) failed 00:29:30.881 get_feature(0x02) failed 00:29:30.881 get_feature(0x04) failed 00:29:30.881 ===================================================== 00:29:30.881 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:30.881 ===================================================== 00:29:30.881 Controller Capabilities/Features 00:29:30.881 ================================ 00:29:30.881 Vendor ID: 0000 00:29:30.881 Subsystem Vendor ID: 
0000 00:29:30.881 Serial Number: 44b14bc22911981e5348 00:29:30.881 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:29:30.881 Firmware Version: 6.8.9-20 00:29:30.881 Recommended Arb Burst: 6 00:29:30.881 IEEE OUI Identifier: 00 00 00 00:29:30.881 Multi-path I/O 00:29:30.881 May have multiple subsystem ports: Yes 00:29:30.881 May have multiple controllers: Yes 00:29:30.881 Associated with SR-IOV VF: No 00:29:30.881 Max Data Transfer Size: Unlimited 00:29:30.881 Max Number of Namespaces: 1024 00:29:30.881 Max Number of I/O Queues: 128 00:29:30.881 NVMe Specification Version (VS): 1.3 00:29:30.881 NVMe Specification Version (Identify): 1.3 00:29:30.881 Maximum Queue Entries: 1024 00:29:30.881 Contiguous Queues Required: No 00:29:30.881 Arbitration Mechanisms Supported 00:29:30.881 Weighted Round Robin: Not Supported 00:29:30.881 Vendor Specific: Not Supported 00:29:30.881 Reset Timeout: 7500 ms 00:29:30.881 Doorbell Stride: 4 bytes 00:29:30.881 NVM Subsystem Reset: Not Supported 00:29:30.881 Command Sets Supported 00:29:30.881 NVM Command Set: Supported 00:29:30.881 Boot Partition: Not Supported 00:29:30.881 Memory Page Size Minimum: 4096 bytes 00:29:30.881 Memory Page Size Maximum: 4096 bytes 00:29:30.881 Persistent Memory Region: Not Supported 00:29:30.881 Optional Asynchronous Events Supported 00:29:30.881 Namespace Attribute Notices: Supported 00:29:30.881 Firmware Activation Notices: Not Supported 00:29:30.881 ANA Change Notices: Supported 00:29:30.881 PLE Aggregate Log Change Notices: Not Supported 00:29:30.881 LBA Status Info Alert Notices: Not Supported 00:29:30.881 EGE Aggregate Log Change Notices: Not Supported 00:29:30.881 Normal NVM Subsystem Shutdown event: Not Supported 00:29:30.881 Zone Descriptor Change Notices: Not Supported 00:29:30.881 Discovery Log Change Notices: Not Supported 00:29:30.882 Controller Attributes 00:29:30.882 128-bit Host Identifier: Supported 00:29:30.882 Non-Operational Permissive Mode: Not Supported 00:29:30.882 NVM Sets: Not 
Supported 00:29:30.882 Read Recovery Levels: Not Supported 00:29:30.882 Endurance Groups: Not Supported 00:29:30.882 Predictable Latency Mode: Not Supported 00:29:30.882 Traffic Based Keep ALive: Supported 00:29:30.882 Namespace Granularity: Not Supported 00:29:30.882 SQ Associations: Not Supported 00:29:30.882 UUID List: Not Supported 00:29:30.882 Multi-Domain Subsystem: Not Supported 00:29:30.882 Fixed Capacity Management: Not Supported 00:29:30.882 Variable Capacity Management: Not Supported 00:29:30.882 Delete Endurance Group: Not Supported 00:29:30.882 Delete NVM Set: Not Supported 00:29:30.883 Extended LBA Formats Supported: Not Supported 00:29:30.883 Flexible Data Placement Supported: Not Supported 00:29:30.883 00:29:30.883 Controller Memory Buffer Support 00:29:30.883 ================================ 00:29:30.883 Supported: No 00:29:30.883 00:29:30.883 Persistent Memory Region Support 00:29:30.883 ================================ 00:29:30.883 Supported: No 00:29:30.883 00:29:30.883 Admin Command Set Attributes 00:29:30.883 ============================ 00:29:30.883 Security Send/Receive: Not Supported 00:29:30.883 Format NVM: Not Supported 00:29:30.883 Firmware Activate/Download: Not Supported 00:29:30.883 Namespace Management: Not Supported 00:29:30.883 Device Self-Test: Not Supported 00:29:30.883 Directives: Not Supported 00:29:30.883 NVMe-MI: Not Supported 00:29:30.883 Virtualization Management: Not Supported 00:29:30.883 Doorbell Buffer Config: Not Supported 00:29:30.883 Get LBA Status Capability: Not Supported 00:29:30.883 Command & Feature Lockdown Capability: Not Supported 00:29:30.883 Abort Command Limit: 4 00:29:30.883 Async Event Request Limit: 4 00:29:30.883 Number of Firmware Slots: N/A 00:29:30.883 Firmware Slot 1 Read-Only: N/A 00:29:30.885 Firmware Activation Without Reset: N/A 00:29:30.885 Multiple Update Detection Support: N/A 00:29:30.885 Firmware Update Granularity: No Information Provided 00:29:30.885 Per-Namespace SMART Log: Yes 
00:29:30.885 Asymmetric Namespace Access Log Page: Supported 00:29:30.885 ANA Transition Time : 10 sec 00:29:30.886 00:29:30.886 Asymmetric Namespace Access Capabilities 00:29:30.886 ANA Optimized State : Supported 00:29:30.886 ANA Non-Optimized State : Supported 00:29:30.886 ANA Inaccessible State : Supported 00:29:30.886 ANA Persistent Loss State : Supported 00:29:30.886 ANA Change State : Supported 00:29:30.886 ANAGRPID is not changed : No 00:29:30.886 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:29:30.886 00:29:30.886 ANA Group Identifier Maximum : 128 00:29:30.886 Number of ANA Group Identifiers : 128 00:29:30.886 Max Number of Allowed Namespaces : 1024 00:29:30.886 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:29:30.886 Command Effects Log Page: Supported 00:29:30.886 Get Log Page Extended Data: Supported 00:29:30.886 Telemetry Log Pages: Not Supported 00:29:30.886 Persistent Event Log Pages: Not Supported 00:29:30.886 Supported Log Pages Log Page: May Support 00:29:30.886 Commands Supported & Effects Log Page: Not Supported 00:29:30.886 Feature Identifiers & Effects Log Page:May Support 00:29:30.886 NVMe-MI Commands & Effects Log Page: May Support 00:29:30.886 Data Area 4 for Telemetry Log: Not Supported 00:29:30.886 Error Log Page Entries Supported: 128 00:29:30.886 Keep Alive: Supported 00:29:30.886 Keep Alive Granularity: 1000 ms 00:29:30.886 00:29:30.886 NVM Command Set Attributes 00:29:30.886 ========================== 00:29:30.886 Submission Queue Entry Size 00:29:30.886 Max: 64 00:29:30.886 Min: 64 00:29:30.886 Completion Queue Entry Size 00:29:30.886 Max: 16 00:29:30.886 Min: 16 00:29:30.886 Number of Namespaces: 1024 00:29:30.886 Compare Command: Not Supported 00:29:30.886 Write Uncorrectable Command: Not Supported 00:29:30.886 Dataset Management Command: Supported 00:29:30.886 Write Zeroes Command: Supported 00:29:30.886 Set Features Save Field: Not Supported 00:29:30.886 Reservations: Not Supported 00:29:30.886 Timestamp: Not Supported 
00:29:30.886 Copy: Not Supported 00:29:30.886 Volatile Write Cache: Present 00:29:30.886 Atomic Write Unit (Normal): 1 00:29:30.886 Atomic Write Unit (PFail): 1 00:29:30.886 Atomic Compare & Write Unit: 1 00:29:30.886 Fused Compare & Write: Not Supported 00:29:30.886 Scatter-Gather List 00:29:30.886 SGL Command Set: Supported 00:29:30.886 SGL Keyed: Not Supported 00:29:30.886 SGL Bit Bucket Descriptor: Not Supported 00:29:30.887 SGL Metadata Pointer: Not Supported 00:29:30.887 Oversized SGL: Not Supported 00:29:30.887 SGL Metadata Address: Not Supported 00:29:30.887 SGL Offset: Supported 00:29:30.887 Transport SGL Data Block: Not Supported 00:29:30.887 Replay Protected Memory Block: Not Supported 00:29:30.887 00:29:30.887 Firmware Slot Information 00:29:30.887 ========================= 00:29:30.887 Active slot: 0 00:29:30.887 00:29:30.887 Asymmetric Namespace Access 00:29:30.887 =========================== 00:29:30.887 Change Count : 0 00:29:30.887 Number of ANA Group Descriptors : 1 00:29:30.887 ANA Group Descriptor : 0 00:29:30.887 ANA Group ID : 1 00:29:30.887 Number of NSID Values : 1 00:29:30.887 Change Count : 0 00:29:30.887 ANA State : 1 00:29:30.887 Namespace Identifier : 1 00:29:30.887 00:29:30.887 Commands Supported and Effects 00:29:30.887 ============================== 00:29:30.887 Admin Commands 00:29:30.887 -------------- 00:29:30.887 Get Log Page (02h): Supported 00:29:30.887 Identify (06h): Supported 00:29:30.887 Abort (08h): Supported 00:29:30.887 Set Features (09h): Supported 00:29:30.887 Get Features (0Ah): Supported 00:29:30.887 Asynchronous Event Request (0Ch): Supported 00:29:30.887 Keep Alive (18h): Supported 00:29:30.887 I/O Commands 00:29:30.887 ------------ 00:29:30.887 Flush (00h): Supported 00:29:30.887 Write (01h): Supported LBA-Change 00:29:30.887 Read (02h): Supported 00:29:30.887 Write Zeroes (08h): Supported LBA-Change 00:29:30.887 Dataset Management (09h): Supported 00:29:30.887 00:29:30.887 Error Log 00:29:30.887 ========= 
00:29:30.887 Entry: 0 00:29:30.888 Error Count: 0x3 00:29:30.888 Submission Queue Id: 0x0 00:29:30.888 Command Id: 0x5 00:29:30.888 Phase Bit: 0 00:29:30.888 Status Code: 0x2 00:29:30.888 Status Code Type: 0x0 00:29:30.888 Do Not Retry: 1 00:29:30.888 Error Location: 0x28 00:29:30.888 LBA: 0x0 00:29:30.888 Namespace: 0x0 00:29:30.888 Vendor Log Page: 0x0 00:29:30.888 ----------- 00:29:30.888 Entry: 1 00:29:30.888 Error Count: 0x2 00:29:30.888 Submission Queue Id: 0x0 00:29:30.888 Command Id: 0x5 00:29:30.888 Phase Bit: 0 00:29:30.888 Status Code: 0x2 00:29:30.888 Status Code Type: 0x0 00:29:30.888 Do Not Retry: 1 00:29:30.888 Error Location: 0x28 00:29:30.888 LBA: 0x0 00:29:30.888 Namespace: 0x0 00:29:30.888 Vendor Log Page: 0x0 00:29:30.888 ----------- 00:29:30.888 Entry: 2 00:29:30.888 Error Count: 0x1 00:29:30.888 Submission Queue Id: 0x0 00:29:30.888 Command Id: 0x4 00:29:30.888 Phase Bit: 0 00:29:30.888 Status Code: 0x2 00:29:30.888 Status Code Type: 0x0 00:29:30.888 Do Not Retry: 1 00:29:30.888 Error Location: 0x28 00:29:30.888 LBA: 0x0 00:29:30.888 Namespace: 0x0 00:29:30.888 Vendor Log Page: 0x0 00:29:30.888 00:29:30.888 Number of Queues 00:29:30.888 ================ 00:29:30.888 Number of I/O Submission Queues: 128 00:29:30.888 Number of I/O Completion Queues: 128 00:29:30.888 00:29:30.888 ZNS Specific Controller Data 00:29:30.888 ============================ 00:29:30.888 Zone Append Size Limit: 0 00:29:30.888 00:29:30.888 00:29:30.888 Active Namespaces 00:29:30.888 ================= 00:29:30.888 get_feature(0x05) failed 00:29:30.889 Namespace ID:1 00:29:30.889 Command Set Identifier: NVM (00h) 00:29:30.889 Deallocate: Supported 00:29:30.889 Deallocated/Unwritten Error: Not Supported 00:29:30.889 Deallocated Read Value: Unknown 00:29:30.889 Deallocate in Write Zeroes: Not Supported 00:29:30.889 Deallocated Guard Field: 0xFFFF 00:29:30.889 Flush: Supported 00:29:30.889 Reservation: Not Supported 00:29:30.889 Namespace Sharing Capabilities: Multiple 
Controllers 00:29:30.889 Size (in LBAs): 3125627568 (1490GiB) 00:29:30.889 Capacity (in LBAs): 3125627568 (1490GiB) 00:29:30.889 Utilization (in LBAs): 3125627568 (1490GiB) 00:29:30.889 UUID: 9b97c2e3-6238-4969-baa8-26c5cff5bc3d 00:29:30.889 Thin Provisioning: Not Supported 00:29:30.889 Per-NS Atomic Units: Yes 00:29:30.889 Atomic Boundary Size (Normal): 0 00:29:30.889 Atomic Boundary Size (PFail): 0 00:29:30.889 Atomic Boundary Offset: 0 00:29:30.889 NGUID/EUI64 Never Reused: No 00:29:30.889 ANA group ID: 1 00:29:30.889 Namespace Write Protected: No 00:29:30.889 Number of LBA Formats: 1 00:29:30.889 Current LBA Format: LBA Format #00 00:29:30.889 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:30.889 00:29:30.889 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:29:30.889 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:30.889 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:29:30.889 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:30.889 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:29:30.889 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:30.889 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:30.889 rmmod nvme_tcp 00:29:30.889 rmmod nvme_fabrics 00:29:30.889 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:30.889 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:29:30.890 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:29:30.890 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:29:30.890 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:30.890 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:30.890 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:30.890 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:29:30.890 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:29:30.890 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:30.890 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:29:30.890 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:30.890 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:30.890 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.890 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:30.890 14:01:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:33.430 14:01:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:33.430 14:01:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:29:33.430 14:01:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:33.430 14:01:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:29:33.430 14:01:15 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:33.430 14:01:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:33.430 14:01:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:33.430 14:01:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:33.430 14:01:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:29:33.430 14:01:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:29:33.430 14:01:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:35.966 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:35.966 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:35.966 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:35.966 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:35.966 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:35.966 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:35.966 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:35.966 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:35.966 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:35.966 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:35.966 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:35.966 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:35.966 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:35.966 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:35.966 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:35.966 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:29:37.345 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:37.605 00:29:37.605 real 0m17.248s 00:29:37.605 user 0m4.356s 00:29:37.605 sys 0m8.746s 00:29:37.605 14:01:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:37.605 14:01:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:29:37.605 ************************************ 00:29:37.605 END TEST nvmf_identify_kernel_target 00:29:37.605 ************************************ 00:29:37.605 14:01:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:37.605 14:01:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:37.605 14:01:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:37.605 14:01:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.605 ************************************ 00:29:37.605 START TEST nvmf_auth_host 00:29:37.605 ************************************ 00:29:37.605 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:37.605 * Looking for test storage... 
00:29:37.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:37.605 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:37.605 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:29:37.605 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:37.605 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:37.605 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:37.605 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:37.605 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:37.605 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:37.605 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:37.605 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:37.605 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:37.605 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:37.605 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:37.605 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:37.605 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:37.605 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:29:37.605 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:29:37.605 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:37.605 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:37.605 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:29:37.865 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:37.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.866 --rc genhtml_branch_coverage=1 00:29:37.866 --rc genhtml_function_coverage=1 00:29:37.866 --rc genhtml_legend=1 00:29:37.866 --rc geninfo_all_blocks=1 00:29:37.866 --rc geninfo_unexecuted_blocks=1 00:29:37.866 00:29:37.866 ' 00:29:37.866 14:01:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:37.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.866 --rc genhtml_branch_coverage=1 00:29:37.866 --rc genhtml_function_coverage=1 00:29:37.866 --rc genhtml_legend=1 00:29:37.866 --rc geninfo_all_blocks=1 00:29:37.866 --rc geninfo_unexecuted_blocks=1 00:29:37.866 00:29:37.866 ' 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:37.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.866 --rc genhtml_branch_coverage=1 00:29:37.866 --rc genhtml_function_coverage=1 00:29:37.866 --rc genhtml_legend=1 00:29:37.866 --rc geninfo_all_blocks=1 00:29:37.866 --rc geninfo_unexecuted_blocks=1 00:29:37.866 00:29:37.866 ' 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:37.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.866 --rc genhtml_branch_coverage=1 00:29:37.866 --rc genhtml_function_coverage=1 00:29:37.866 --rc genhtml_legend=1 00:29:37.866 --rc geninfo_all_blocks=1 00:29:37.866 --rc geninfo_unexecuted_blocks=1 00:29:37.866 00:29:37.866 ' 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.866 14:01:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:37.866 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:37.866 14:01:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:29:37.866 14:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:44.435 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:44.435 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:44.436 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:44.436 Found net devices under 0000:86:00.0: cvl_0_0 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:44.436 Found net devices under 0000:86:00.1: cvl_0_1 00:29:44.436 14:01:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:44.436 14:01:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:44.436 14:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:44.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:44.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:29:44.436 00:29:44.436 --- 10.0.0.2 ping statistics --- 00:29:44.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:44.436 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:44.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:44.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:29:44.436 00:29:44.436 --- 10.0.0.1 ping statistics --- 00:29:44.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:44.436 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=798597 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 798597 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 798597 ']' 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:44.436 14:01:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9e1dbd81e411f8c06a713686ca16c204 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.UdC 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9e1dbd81e411f8c06a713686ca16c204 0 00:29:44.436 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9e1dbd81e411f8c06a713686ca16c204 0 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9e1dbd81e411f8c06a713686ca16c204 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.UdC 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.UdC 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.UdC 
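The trace above (nvmf/common.sh@751-760) shows `gen_dhchap_key` drawing random bytes with `xxd -p /dev/urandom`, then piping them through an inline `python -` step to produce the key file. The sketch below is a hedged, standalone reconstruction of what that formatting step appears to do: wrap the raw bytes in the NVMe DH-HMAC-CHAP secret representation `DHHC-1:<tt>:<base64(key || crc32)>:`, where `<tt>` is the hash indicator matching the digest map visible in the trace (`null=0`, `sha256=1`, `sha384=2`, `sha512=3`). The function name and exact byte layout are assumptions based on the spec's secret format, not SPDK's code verbatim.

```python
# Hedged reconstruction of the gen_dhchap_key / format_dhchap_key steps seen
# in the trace. Assumption: the secret representation is
# "DHHC-1:<tt>:<base64(raw key + little-endian CRC-32 of the key)>:".
import base64
import os
import struct
import zlib

# Digest indicator map, as shown in the trace's digests=() associative array.
DIGESTS = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}


def gen_dhchap_key(digest: str, hex_len: int) -> str:
    """Return a DHHC-1 formatted secret with hex_len hex chars of entropy."""
    # Equivalent of: xxd -p -c0 -l <hex_len/2> /dev/urandom
    raw = os.urandom(hex_len // 2)
    # CRC-32 of the key, appended in little-endian before base64 encoding.
    crc = struct.pack("<I", zlib.crc32(raw))
    blob = base64.b64encode(raw + crc).decode()
    return "DHHC-1:%02x:%s:" % (DIGESTS[digest], blob)


print(gen_dhchap_key("sha512", 64))  # e.g. DHHC-1:03:<base64 blob>:
```

In the log, each resulting string is written to a `mktemp`-created file, `chmod 0600`-ed, and the path (not the secret itself) is stored in `keys[]`/`ckeys[]`.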
00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d34635121386492b7771c153b8cb7090827a6771ba33fcc7477136f7a8fa7d4e 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.1x3 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d34635121386492b7771c153b8cb7090827a6771ba33fcc7477136f7a8fa7d4e 3 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d34635121386492b7771c153b8cb7090827a6771ba33fcc7477136f7a8fa7d4e 3 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d34635121386492b7771c153b8cb7090827a6771ba33fcc7477136f7a8fa7d4e 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.1x3 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.1x3 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.1x3 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=67702019b24461bcfa9c3b457a4a61122e2de99ca8c3d402 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.WVV 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 67702019b24461bcfa9c3b457a4a61122e2de99ca8c3d402 0 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 67702019b24461bcfa9c3b457a4a61122e2de99ca8c3d402 0 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=67702019b24461bcfa9c3b457a4a61122e2de99ca8c3d402 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.WVV 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.WVV 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.WVV 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=273119e62938c196319021ae220d08470e14425edffd39ac 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.yc1 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 273119e62938c196319021ae220d08470e14425edffd39ac 2 00:29:44.437 14:01:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 273119e62938c196319021ae220d08470e14425edffd39ac 2 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=273119e62938c196319021ae220d08470e14425edffd39ac 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.yc1 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.yc1 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.yc1 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a6c1bc4d623ac3ee020a1c2635d77cb9 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Xd7 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a6c1bc4d623ac3ee020a1c2635d77cb9 1 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a6c1bc4d623ac3ee020a1c2635d77cb9 1 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a6c1bc4d623ac3ee020a1c2635d77cb9 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Xd7 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Xd7 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Xd7 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=136f165e1688aae9a97bf238c9fb195c 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Al7 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 136f165e1688aae9a97bf238c9fb195c 1 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 136f165e1688aae9a97bf238c9fb195c 1 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=136f165e1688aae9a97bf238c9fb195c 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Al7 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Al7 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Al7 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:44.437 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:44.437 14:01:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bffb441473272f53472df5c523acfc538a9121106cc96a2b 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Y9E 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bffb441473272f53472df5c523acfc538a9121106cc96a2b 2 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bffb441473272f53472df5c523acfc538a9121106cc96a2b 2 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bffb441473272f53472df5c523acfc538a9121106cc96a2b 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Y9E 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Y9E 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Y9E 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a21850a3093804daae220774c0b0d4b3 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.gho 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a21850a3093804daae220774c0b0d4b3 0 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a21850a3093804daae220774c0b0d4b3 0 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a21850a3093804daae220774c0b0d4b3 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.gho 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.gho 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.gho 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5b49f343fec6214eebbce14edaa78bfed382a55a39f933058c7f7002defbf1a0 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.oY7 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5b49f343fec6214eebbce14edaa78bfed382a55a39f933058c7f7002defbf1a0 3 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5b49f343fec6214eebbce14edaa78bfed382a55a39f933058c7f7002defbf1a0 3 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5b49f343fec6214eebbce14edaa78bfed382a55a39f933058c7f7002defbf1a0 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:29:44.438 14:01:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:44.438 14:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.oY7 00:29:44.438 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.oY7 00:29:44.438 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.oY7 00:29:44.438 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:29:44.438 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 798597 00:29:44.438 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 798597 ']' 00:29:44.438 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.438 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:44.438 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:44.438 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:44.438 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.UdC 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.1x3 ]] 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1x3 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.WVV 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.yc1 ]] 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.yc1 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Xd7 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Al7 ]] 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Al7 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.Y9E 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.gho ]] 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.gho 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.698 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.955 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.955 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:44.955 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.oY7 00:29:44.955 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.955 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.955 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.955 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:29:44.955 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:29:44.955 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:29:44.955 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:44.955 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:44.955 14:01:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:44.955 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.955 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.955 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:44.955 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.955 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:44.955 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:44.955 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:44.955 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:29:44.955 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:29:44.955 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:29:44.955 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:44.955 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:44.955 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:44.956 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:29:44.956 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:44.956 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:29:44.956 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:44.956 14:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:47.484 Waiting for block devices as requested 00:29:47.485 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:47.743 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:47.743 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:47.743 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:48.000 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:48.000 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:48.000 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:48.000 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:48.259 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:48.259 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:48.259 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:48.259 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:48.519 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:48.519 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:48.519 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:48.778 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:48.778 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:49.346 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:29:49.346 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:49.346 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:29:49.346 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:29:49.346 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:29:49.346 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:49.346 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:29:49.346 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:29:49.346 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:49.346 No valid GPT data, bailing 00:29:49.346 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:49.346 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:29:49.346 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:29:49.346 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:29:49.346 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:29:49.346 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:49.346 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:49.346 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:49.346 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:29:49.346 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:29:49.346 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:29:49.346 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:29:49.346 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:29:49.346 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:29:49.346 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:29:49.346 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:29:49.346 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:49.346 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:29:49.606 00:29:49.606 Discovery Log Number of Records 2, Generation counter 2 00:29:49.606 =====Discovery Log Entry 0====== 00:29:49.606 trtype: tcp 00:29:49.606 adrfam: ipv4 00:29:49.606 subtype: current discovery subsystem 00:29:49.606 treq: not specified, sq flow control disable supported 00:29:49.606 portid: 1 00:29:49.606 trsvcid: 4420 00:29:49.606 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:49.606 traddr: 10.0.0.1 00:29:49.606 eflags: none 00:29:49.606 sectype: none 00:29:49.606 =====Discovery Log Entry 1====== 00:29:49.606 trtype: tcp 00:29:49.606 adrfam: ipv4 00:29:49.606 subtype: nvme subsystem 00:29:49.606 treq: not specified, sq flow control disable supported 00:29:49.606 portid: 1 00:29:49.606 trsvcid: 4420 00:29:49.606 subnqn: nqn.2024-02.io.spdk:cnode0 00:29:49.606 traddr: 10.0.0.1 00:29:49.606 eflags: none 00:29:49.606 sectype: none 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==: 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==: 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: ]] 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:49.606 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:49.607 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:49.607 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:49.607 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:49.607 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:49.607 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:49.607 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:49.607 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:49.607 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:49.607 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:49.607 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.607 14:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.607 nvme0n1 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUxZGJkODFlNDExZjhjMDZhNzEzNjg2Y2ExNmMyMDSVEwwe: 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUxZGJkODFlNDExZjhjMDZhNzEzNjg2Y2ExNmMyMDSVEwwe: 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: ]] 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:49.607 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.866 nvme0n1 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.866 14:01:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:49.866 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:49.867 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==: 00:29:49.867 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: 00:29:49.867 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:49.867 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:49.867 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==: 00:29:49.867 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: ]] 00:29:49.867 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: 00:29:49.867 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:29:49.867 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:49.867 
14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:49.867 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:29:49.867 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:29:49.867 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:49.867 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:29:49.867 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:49.867 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:49.867 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:49.867 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:49.867 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:49.867 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:49.867 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:49.867 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:49.867 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:49.867 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:49.867 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:49.867 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:49.867 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:49.867 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:49.867 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:29:49.867 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:49.867 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:50.126 nvme0n1
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB:
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W:
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB:
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: ]]
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W:
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:50.126 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:50.385 nvme0n1
00:29:50.385 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:50.385 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:50.385 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:50.385 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:50.385 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:50.385 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:50.385 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:50.385 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:50.385 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:50.385 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:50.385 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:50.385 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:50.385 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:29:50.385 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:50.385 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:50.385 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZmYjQ0MTQ3MzI3MmY1MzQ3MmRmNWM1MjNhY2ZjNTM4YTkxMjExMDZjYzk2YTJiekRfRw==:
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1:
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZmYjQ0MTQ3MzI3MmY1MzQ3MmRmNWM1MjNhY2ZjNTM4YTkxMjExMDZjYzk2YTJiekRfRw==:
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: ]]
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1:
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:50.386 14:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:50.645 nvme0n1
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWI0OWYzNDNmZWM2MjE0ZWViYmNlMTRlZGFhNzhiZmVkMzgyYTU1YTM5ZjkzMzA1OGM3ZjcwMDJkZWZiZjFhMJInd18=:
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWI0OWYzNDNmZWM2MjE0ZWViYmNlMTRlZGFhNzhiZmVkMzgyYTU1YTM5ZjkzMzA1OGM3ZjcwMDJkZWZiZjFhMJInd18=:
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:50.645 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:50.904 nvme0n1
00:29:50.904 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:50.904 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:50.904 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUxZGJkODFlNDExZjhjMDZhNzEzNjg2Y2ExNmMyMDSVEwwe:
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=:
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUxZGJkODFlNDExZjhjMDZhNzEzNjg2Y2ExNmMyMDSVEwwe:
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: ]]
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=:
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:50.905 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:51.164 nvme0n1
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==:
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==:
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==:
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: ]]
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==:
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:51.164 nvme0n1
00:29:51.164 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:51.423 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:51.423 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:51.423 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:51.423 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:51.423 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:51.423 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:51.423 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:51.423 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:51.423 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:51.423 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:51.423 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:51.423 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:29:51.423 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:51.423 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:51.423 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:51.423 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:29:51.423 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB:
00:29:51.423 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W:
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB:
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: ]]
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W:
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:51.424 nvme0n1
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:51.424 14:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZmYjQ0MTQ3MzI3MmY1MzQ3MmRmNWM1MjNhY2ZjNTM4YTkxMjExMDZjYzk2YTJiekRfRw==:
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1:
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZmYjQ0MTQ3MzI3MmY1MzQ3MmRmNWM1MjNhY2ZjNTM4YTkxMjExMDZjYzk2YTJiekRfRw==:
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: ]]
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1:
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:51.683 nvme0n1
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563
00:29:51.683 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.942 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.942 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:51.942 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:51.942 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.942 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.942 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.942 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:51.942 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:29:51.942 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:51.942 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:51.942 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:51.942 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:51.942 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWI0OWYzNDNmZWM2MjE0ZWViYmNlMTRlZGFhNzhiZmVkMzgyYTU1YTM5ZjkzMzA1OGM3ZjcwMDJkZWZiZjFhMJInd18=: 00:29:51.942 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:51.942 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:51.942 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:51.942 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NWI0OWYzNDNmZWM2MjE0ZWViYmNlMTRlZGFhNzhiZmVkMzgyYTU1YTM5ZjkzMzA1OGM3ZjcwMDJkZWZiZjFhMJInd18=: 00:29:51.942 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:51.942 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:29:51.943 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:51.943 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:51.943 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:51.943 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:51.943 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:51.943 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:51.943 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.943 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.943 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.943 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:51.943 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:51.943 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:51.943 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:51.943 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:51.943 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:51.943 14:01:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:51.943 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:51.943 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:51.943 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:51.943 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:51.943 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:51.943 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.943 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.943 nvme0n1 00:29:51.943 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.943 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:51.943 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:51.943 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.943 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.943 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUxZGJkODFlNDExZjhjMDZhNzEzNjg2Y2ExNmMyMDSVEwwe: 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUxZGJkODFlNDExZjhjMDZhNzEzNjg2Y2ExNmMyMDSVEwwe: 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: ]] 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.203 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.462 nvme0n1 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==: 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==: 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: ]] 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:52.462 
14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:52.462 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:52.463 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:52.463 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:52.463 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:52.463 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:52.463 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:52.463 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:52.463 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:52.463 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:52.463 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:52.463 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.463 14:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.721 nvme0n1 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:52.721 14:01:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB: 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB: 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: ]] 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.721 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.978 nvme0n1 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.978 14:01:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZmYjQ0MTQ3MzI3MmY1MzQ3MmRmNWM1MjNhY2ZjNTM4YTkxMjExMDZjYzk2YTJiekRfRw==: 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: 00:29:52.978 
14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZmYjQ0MTQ3MzI3MmY1MzQ3MmRmNWM1MjNhY2ZjNTM4YTkxMjExMDZjYzk2YTJiekRfRw==: 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: ]] 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.978 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.272 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.272 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:53.272 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:53.272 14:01:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:53.272 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:53.272 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:53.272 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:53.272 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:53.272 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:53.272 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:53.272 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:53.272 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:53.272 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:53.272 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.272 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.272 nvme0n1 00:29:53.272 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.272 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:53.272 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:53.272 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.272 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.612 14:01:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWI0OWYzNDNmZWM2MjE0ZWViYmNlMTRlZGFhNzhiZmVkMzgyYTU1YTM5ZjkzMzA1OGM3ZjcwMDJkZWZiZjFhMJInd18=: 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWI0OWYzNDNmZWM2MjE0ZWViYmNlMTRlZGFhNzhiZmVkMzgyYTU1YTM5ZjkzMzA1OGM3ZjcwMDJkZWZiZjFhMJInd18=: 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:53.612 
14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.612 14:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.612 nvme0n1 00:29:53.612 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.612 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:53.612 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:53.612 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.612 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.612 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.612 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:53.612 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:53.612 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.612 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUxZGJkODFlNDExZjhjMDZhNzEzNjg2Y2ExNmMyMDSVEwwe: 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUxZGJkODFlNDExZjhjMDZhNzEzNjg2Y2ExNmMyMDSVEwwe: 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: ]] 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:53.871 14:01:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.871 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.131 nvme0n1 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==: 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==: 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: ]] 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:54.131 14:01:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.131 14:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.699 nvme0n1 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB: 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB: 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: ]] 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.699 14:01:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.699 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.958 nvme0n1 00:29:54.958 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.958 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.958 14:01:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:54.958 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.958 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.958 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZmYjQ0MTQ3MzI3MmY1MzQ3MmRmNWM1MjNhY2ZjNTM4YTkxMjExMDZjYzk2YTJiekRfRw==: 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:55.217 14:01:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZmYjQ0MTQ3MzI3MmY1MzQ3MmRmNWM1MjNhY2ZjNTM4YTkxMjExMDZjYzk2YTJiekRfRw==: 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: ]] 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:55.217 14:01:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.217 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.476 nvme0n1 00:29:55.476 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.476 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:55.476 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.476 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:55.476 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.477 14:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.477 14:01:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWI0OWYzNDNmZWM2MjE0ZWViYmNlMTRlZGFhNzhiZmVkMzgyYTU1YTM5ZjkzMzA1OGM3ZjcwMDJkZWZiZjFhMJInd18=: 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWI0OWYzNDNmZWM2MjE0ZWViYmNlMTRlZGFhNzhiZmVkMzgyYTU1YTM5ZjkzMzA1OGM3ZjcwMDJkZWZiZjFhMJInd18=: 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:55.477 14:01:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.477 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.045 nvme0n1 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUxZGJkODFlNDExZjhjMDZhNzEzNjg2Y2ExNmMyMDSVEwwe: 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUxZGJkODFlNDExZjhjMDZhNzEzNjg2Y2ExNmMyMDSVEwwe: 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: ]] 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:56.045 14:01:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.045 14:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.613 nvme0n1 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:56.613 14:01:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==: 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==: 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: ]] 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:56.613 14:01:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:56.613 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.613 14:01:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.181 nvme0n1 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB: 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB: 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: ]] 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.181 14:01:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:57.181 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:57.182 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:57.182 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:57.182 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:57.182 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:57.182 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:57.182 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:57.182 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:57.182 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.182 14:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.748 nvme0n1 00:29:57.748 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.748 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:57.749 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:57.749 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.749 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.749 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZmYjQ0MTQ3MzI3MmY1MzQ3MmRmNWM1MjNhY2ZjNTM4YTkxMjExMDZjYzk2YTJiekRfRw==: 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:YmZmYjQ0MTQ3MzI3MmY1MzQ3MmRmNWM1MjNhY2ZjNTM4YTkxMjExMDZjYzk2YTJiekRfRw==: 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: ]] 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.007 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.577 nvme0n1 00:29:58.577 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.577 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:58.577 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:58.577 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.577 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.577 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.577 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:58.577 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:58.577 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.577 14:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWI0OWYzNDNmZWM2MjE0ZWViYmNlMTRlZGFhNzhiZmVkMzgyYTU1YTM5ZjkzMzA1OGM3ZjcwMDJkZWZiZjFhMJInd18=: 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWI0OWYzNDNmZWM2MjE0ZWViYmNlMTRlZGFhNzhiZmVkMzgyYTU1YTM5ZjkzMzA1OGM3ZjcwMDJkZWZiZjFhMJInd18=: 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:58.577 
14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.577 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.144 nvme0n1 00:29:59.144 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.144 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:59.144 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:59.144 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.144 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.144 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.144 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:59.144 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:59.144 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.144 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.144 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.144 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:59.144 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:59.144 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:29:59.144 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:29:59.144 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:59.144 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:59.144 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:59.144 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:59.144 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUxZGJkODFlNDExZjhjMDZhNzEzNjg2Y2ExNmMyMDSVEwwe: 00:29:59.145 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: 00:29:59.145 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:59.145 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:59.145 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUxZGJkODFlNDExZjhjMDZhNzEzNjg2Y2ExNmMyMDSVEwwe: 00:29:59.145 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: ]] 00:29:59.145 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: 00:29:59.145 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:29:59.145 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:59.145 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:59.145 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:29:59.145 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:59.145 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:59.145 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:59.145 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.145 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.145 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.145 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:59.145 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:59.145 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:59.145 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:59.145 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:59.145 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:59.145 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:59.145 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:59.145 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:59.145 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:59.145 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:59.145 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:29:59.145 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:59.145 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:59.403 nvme0n1
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==:
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==:
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==:
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: ]]
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==:
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:59.403 14:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:59.661 nvme0n1
00:29:59.661 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:59.661 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:59.661 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:59.661 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:59.661 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:59.661 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:59.661 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:59.661 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:59.661 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:59.661 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:59.661 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:59.661 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:59.661 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:29:59.661 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:59.661 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB:
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W:
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB:
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: ]]
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W:
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:59.662 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:59.920 nvme0n1
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZmYjQ0MTQ3MzI3MmY1MzQ3MmRmNWM1MjNhY2ZjNTM4YTkxMjExMDZjYzk2YTJiekRfRw==:
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1:
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZmYjQ0MTQ3MzI3MmY1MzQ3MmRmNWM1MjNhY2ZjNTM4YTkxMjExMDZjYzk2YTJiekRfRw==:
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: ]]
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1:
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:59.920 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:59.920 nvme0n1
00:29:59.921 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:59.921 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:59.921 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:59.921 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:59.921 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWI0OWYzNDNmZWM2MjE0ZWViYmNlMTRlZGFhNzhiZmVkMzgyYTU1YTM5ZjkzMzA1OGM3ZjcwMDJkZWZiZjFhMJInd18=:
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWI0OWYzNDNmZWM2MjE0ZWViYmNlMTRlZGFhNzhiZmVkMzgyYTU1YTM5ZjkzMzA1OGM3ZjcwMDJkZWZiZjFhMJInd18=:
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:00.179 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:00.180 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:30:00.180 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.180 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:00.180 nvme0n1
00:30:00.180 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.180 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:00.180 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:00.180 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.180 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:00.180 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUxZGJkODFlNDExZjhjMDZhNzEzNjg2Y2ExNmMyMDSVEwwe:
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=:
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUxZGJkODFlNDExZjhjMDZhNzEzNjg2Y2ExNmMyMDSVEwwe:
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: ]]
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=:
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:00.439 nvme0n1
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:00.439 14:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.439 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:00.439 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:00.439 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.439 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:00.698 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.698 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:00.698 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:30:00.698 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:00.698 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:00.698 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:30:00.698 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:30:00.698 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==:
00:30:00.698 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==:
00:30:00.698 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:00.698 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:30:00.698 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==:
00:30:00.698 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: ]]
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==:
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:00.699 nvme0n1
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB:
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W:
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB:
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: ]]
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W:
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:00.958 nvme0n1
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZmYjQ0MTQ3MzI3MmY1MzQ3MmRmNWM1MjNhY2ZjNTM4YTkxMjExMDZjYzk2YTJiekRfRw==:
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1:
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZmYjQ0MTQ3MzI3MmY1MzQ3MmRmNWM1MjNhY2ZjNTM4YTkxMjExMDZjYzk2YTJiekRfRw==:
00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host --
host/auth.sh@51 -- # [[ -z DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: ]] 00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: 00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:00.958 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.217 nvme0n1 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWI0OWYzNDNmZWM2MjE0ZWViYmNlMTRlZGFhNzhiZmVkMzgyYTU1YTM5ZjkzMzA1OGM3ZjcwMDJkZWZiZjFhMJInd18=: 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWI0OWYzNDNmZWM2MjE0ZWViYmNlMTRlZGFhNzhiZmVkMzgyYTU1YTM5ZjkzMzA1OGM3ZjcwMDJkZWZiZjFhMJInd18=: 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.217 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.477 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.477 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:01.477 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:01.477 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:01.477 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:01.477 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:01.477 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:01.477 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:01.477 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:01.477 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:01.477 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:01.477 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:01.477 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:01.477 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.477 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.477 nvme0n1 00:30:01.477 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.477 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:01.477 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.477 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:01.477 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.477 14:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.477 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:01.477 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:01.477 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.477 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.477 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.477 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:01.477 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:01.477 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:30:01.477 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:01.477 14:01:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:01.477 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:01.477 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:01.477 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUxZGJkODFlNDExZjhjMDZhNzEzNjg2Y2ExNmMyMDSVEwwe: 00:30:01.477 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: 00:30:01.477 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:01.477 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:01.477 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUxZGJkODFlNDExZjhjMDZhNzEzNjg2Y2ExNmMyMDSVEwwe: 00:30:01.477 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: ]] 00:30:01.477 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: 00:30:01.477 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:30:01.477 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:01.477 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:01.477 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:01.477 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:01.477 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:01.477 14:01:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:01.477 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.477 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.477 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.477 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:01.736 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:01.736 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:01.736 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:01.736 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:01.736 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:01.736 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:01.736 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:01.736 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:01.736 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:01.736 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:01.736 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:01.736 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.736 14:01:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.736 nvme0n1 00:30:01.736 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.736 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:01.736 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:01.736 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.736 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.737 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==: 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==: 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: ]] 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.996 
14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.996 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.255 nvme0n1 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB: 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB: 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: ]] 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 
00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.255 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.513 nvme0n1 00:30:02.513 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.513 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:02.513 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:02.513 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.513 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.513 14:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.513 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:30:02.513 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:02.513 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.513 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.513 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.513 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:02.513 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:30:02.513 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:02.513 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:02.513 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:02.513 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:02.513 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZmYjQ0MTQ3MzI3MmY1MzQ3MmRmNWM1MjNhY2ZjNTM4YTkxMjExMDZjYzk2YTJiekRfRw==: 00:30:02.513 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: 00:30:02.514 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:02.514 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:02.514 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZmYjQ0MTQ3MzI3MmY1MzQ3MmRmNWM1MjNhY2ZjNTM4YTkxMjExMDZjYzk2YTJiekRfRw==: 00:30:02.514 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: ]] 00:30:02.514 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: 00:30:02.514 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:30:02.514 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:02.514 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:02.514 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:02.514 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:02.514 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:02.514 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:02.514 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.514 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.514 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.514 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:02.514 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:02.514 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:02.514 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:02.514 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:02.514 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:02.514 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:02.514 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:02.514 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:02.514 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:02.514 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:02.514 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:02.514 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.514 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.771 nvme0n1 00:30:02.771 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.771 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:02.771 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:02.771 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.771 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.771 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.771 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:02.771 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:02.771 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.771 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.771 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.771 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:02.771 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:30:02.771 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:02.771 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:02.771 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:02.771 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:02.771 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWI0OWYzNDNmZWM2MjE0ZWViYmNlMTRlZGFhNzhiZmVkMzgyYTU1YTM5ZjkzMzA1OGM3ZjcwMDJkZWZiZjFhMJInd18=: 00:30:02.771 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:02.771 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:02.771 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:02.771 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWI0OWYzNDNmZWM2MjE0ZWViYmNlMTRlZGFhNzhiZmVkMzgyYTU1YTM5ZjkzMzA1OGM3ZjcwMDJkZWZiZjFhMJInd18=: 00:30:02.771 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:02.771 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:30:02.771 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:02.771 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:02.771 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:02.771 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:03.029 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:03.029 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:03.029 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.029 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.029 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.029 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:03.029 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:03.029 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:03.029 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:03.029 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:03.029 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:03.029 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:03.029 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:03.029 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:03.029 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:03.030 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:03.030 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:03.030 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.030 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.030 nvme0n1 00:30:03.030 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.030 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:03.030 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:03.030 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.030 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 
00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUxZGJkODFlNDExZjhjMDZhNzEzNjg2Y2ExNmMyMDSVEwwe: 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUxZGJkODFlNDExZjhjMDZhNzEzNjg2Y2ExNmMyMDSVEwwe: 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: ]] 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:03.287 14:01:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:03.287 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:03.288 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:03.288 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:03.288 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:03.288 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:03.288 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:03.288 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:03.288 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:03.288 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:03.288 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:03.288 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.288 14:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.545 nvme0n1 00:30:03.545 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.545 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:03.545 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:03.545 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.545 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.545 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==: 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==: 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: ]] 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.546 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.113 nvme0n1 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB: 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB: 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: ]] 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.113 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.371 nvme0n1 00:30:04.371 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.371 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:04.371 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:04.371 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.371 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.630 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.630 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.630 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:30:04.630 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.630 14:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZmYjQ0MTQ3MzI3MmY1MzQ3MmRmNWM1MjNhY2ZjNTM4YTkxMjExMDZjYzk2YTJiekRfRw==: 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZmYjQ0MTQ3MzI3MmY1MzQ3MmRmNWM1MjNhY2ZjNTM4YTkxMjExMDZjYzk2YTJiekRfRw==: 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: ]] 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.630 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.889 nvme0n1 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWI0OWYzNDNmZWM2MjE0ZWViYmNlMTRlZGFhNzhiZmVkMzgyYTU1YTM5ZjkzMzA1OGM3ZjcwMDJkZWZiZjFhMJInd18=: 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWI0OWYzNDNmZWM2MjE0ZWViYmNlMTRlZGFhNzhiZmVkMzgyYTU1YTM5ZjkzMzA1OGM3ZjcwMDJkZWZiZjFhMJInd18=: 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:04.889 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:05.147 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:05.147 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:05.147 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:05.147 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:05.147 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.147 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:05.406 nvme0n1 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:05.406 14:01:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUxZGJkODFlNDExZjhjMDZhNzEzNjg2Y2ExNmMyMDSVEwwe: 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUxZGJkODFlNDExZjhjMDZhNzEzNjg2Y2ExNmMyMDSVEwwe: 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: ]] 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.406 14:01:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.406 14:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.972 nvme0n1 00:30:05.972 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.972 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:05.972 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.972 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:05.972 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.972 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.972 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:05.972 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:05.972 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.972 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.972 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.972 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:05.972 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:30:05.972 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:05.972 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:05.972 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:05.972 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:06.230 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==: 00:30:06.230 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: 00:30:06.230 14:01:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:06.230 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:06.230 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==: 00:30:06.230 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: ]] 00:30:06.230 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: 00:30:06.230 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:30:06.230 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:06.230 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:06.230 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:06.230 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:06.230 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:06.230 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:06.230 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.230 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.230 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.230 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:06.230 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:30:06.230 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:06.230 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:06.230 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:06.230 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:06.230 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:06.230 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:06.230 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:06.230 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:06.230 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:06.230 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:06.230 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.230 14:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.798 nvme0n1 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.798 
14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB: 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB: 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: ]] 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:06.798 14:01:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:06.798 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:06.799 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:06.799 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:06.799 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:06.799 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.799 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.368 nvme0n1 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.368 14:01:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZmYjQ0MTQ3MzI3MmY1MzQ3MmRmNWM1MjNhY2ZjNTM4YTkxMjExMDZjYzk2YTJiekRfRw==: 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZmYjQ0MTQ3MzI3MmY1MzQ3MmRmNWM1MjNhY2ZjNTM4YTkxMjExMDZjYzk2YTJiekRfRw==: 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: ]] 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:07.368 14:01:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.368 14:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.936 nvme0n1 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:30:07.936 14:01:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWI0OWYzNDNmZWM2MjE0ZWViYmNlMTRlZGFhNzhiZmVkMzgyYTU1YTM5ZjkzMzA1OGM3ZjcwMDJkZWZiZjFhMJInd18=: 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWI0OWYzNDNmZWM2MjE0ZWViYmNlMTRlZGFhNzhiZmVkMzgyYTU1YTM5ZjkzMzA1OGM3ZjcwMDJkZWZiZjFhMJInd18=: 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.936 14:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.506 nvme0n1 00:30:08.506 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.506 
14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:08.506 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:08.506 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.506 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.506 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.506 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:08.506 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:08.506 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.506 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWUxZGJkODFlNDExZjhjMDZhNzEzNjg2Y2ExNmMyMDSVEwwe: 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUxZGJkODFlNDExZjhjMDZhNzEzNjg2Y2ExNmMyMDSVEwwe: 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: ]] 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.765 nvme0n1 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:08.765 14:01:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==: 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==: 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: ]] 00:30:08.765 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: 00:30:08.766 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:30:08.766 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:08.766 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:08.766 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:08.766 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:08.766 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:08.766 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:08.766 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.766 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.024 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # ip_candidates=() 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.025 nvme0n1 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB: 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB: 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: ]] 00:30:09.025 14:01:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.025 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.284 nvme0n1 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.284 14:01:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZmYjQ0MTQ3MzI3MmY1MzQ3MmRmNWM1MjNhY2ZjNTM4YTkxMjExMDZjYzk2YTJiekRfRw==: 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZmYjQ0MTQ3MzI3MmY1MzQ3MmRmNWM1MjNhY2ZjNTM4YTkxMjExMDZjYzk2YTJiekRfRw==: 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: ]] 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:09.284 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:09.284 14:01:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:09.285 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.285 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.544 nvme0n1 00:30:09.544 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.544 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:09.544 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:09.544 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.544 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.544 14:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.544 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:09.544 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:09.544 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
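The xtrace records repeat one pattern per key id: configure the target side (`nvmet_auth_set_key`), restrict the host to one digest/dhgroup pair (`bdev_nvme_set_options`), attach with `--dhchap-key keyN` (plus `--dhchap-ctrlr-key ckeyN` when a controller key exists, which is why key 4 attaches without one), verify the controller appeared, and detach. A runnable sketch of that loop, with `rpc_cmd` stubbed as a dry-run echo since there is no live SPDK target here, and placeholder secrets instead of the log's real DHHC-1 strings:

```shell
#!/usr/bin/env bash
# Dry-run reconstruction of the host-side auth loop seen in host/auth.sh.
rpc_cmd() { echo "rpc_cmd $*"; }

keys=(k0 k1 k2 k3 k4)       # placeholder DHHC-1 secrets
ckeys=(c0 c1 c2 c3 "")      # key 4 has no controller (bidirectional) secret

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Empty ckey => omit --dhchap-ctrlr-key, mirroring ${ckeys[keyid]:+...}.
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
        --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
    rpc_cmd bdev_nvme_get_controllers  # real script checks jq '.[].name' == nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0
}

for dhgroup in ffdhe2048 ffdhe3072; do
    for keyid in "${!keys[@]}"; do
        connect_authenticate sha512 "$dhgroup" "$keyid"
    done
done
```

This matches the progression visible in the log: all five key ids under sha512/ffdhe2048, then the same sweep again under ffdhe3072.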
00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWI0OWYzNDNmZWM2MjE0ZWViYmNlMTRlZGFhNzhiZmVkMzgyYTU1YTM5ZjkzMzA1OGM3ZjcwMDJkZWZiZjFhMJInd18=: 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWI0OWYzNDNmZWM2MjE0ZWViYmNlMTRlZGFhNzhiZmVkMzgyYTU1YTM5ZjkzMzA1OGM3ZjcwMDJkZWZiZjFhMJInd18=: 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.545 14:01:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.545 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.805 nvme0n1 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUxZGJkODFlNDExZjhjMDZhNzEzNjg2Y2ExNmMyMDSVEwwe: 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUxZGJkODFlNDExZjhjMDZhNzEzNjg2Y2ExNmMyMDSVEwwe: 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: ]] 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.805 14:01:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:09.805 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:09.806 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:09.806 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:09.806 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:09.806 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:09.806 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:09.806 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:09.806 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:09.806 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.806 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.066 nvme0n1 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==: 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:10.066 14:01:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==: 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: ]] 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.066 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.328 nvme0n1 00:30:10.328 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.328 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:10.328 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:10.328 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.328 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.328 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB: 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB: 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: ]] 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: 00:30:10.329 
14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:10.329 14:01:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.329 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.588 nvme0n1 00:30:10.588 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.588 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:10.588 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:10.588 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.588 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.588 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.588 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:10.588 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:10.588 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.588 14:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.588 14:01:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZmYjQ0MTQ3MzI3MmY1MzQ3MmRmNWM1MjNhY2ZjNTM4YTkxMjExMDZjYzk2YTJiekRfRw==: 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZmYjQ0MTQ3MzI3MmY1MzQ3MmRmNWM1MjNhY2ZjNTM4YTkxMjExMDZjYzk2YTJiekRfRw==: 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: ]] 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.588 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.847 nvme0n1 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:10.847 14:01:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWI0OWYzNDNmZWM2MjE0ZWViYmNlMTRlZGFhNzhiZmVkMzgyYTU1YTM5ZjkzMzA1OGM3ZjcwMDJkZWZiZjFhMJInd18=: 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWI0OWYzNDNmZWM2MjE0ZWViYmNlMTRlZGFhNzhiZmVkMzgyYTU1YTM5ZjkzMzA1OGM3ZjcwMDJkZWZiZjFhMJInd18=: 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.847 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.107 nvme0n1 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUxZGJkODFlNDExZjhjMDZhNzEzNjg2Y2ExNmMyMDSVEwwe: 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUxZGJkODFlNDExZjhjMDZhNzEzNjg2Y2ExNmMyMDSVEwwe: 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: ]] 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:11.107 14:01:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.107 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.366 nvme0n1 00:30:11.366 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.366 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:11.366 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:11.366 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.366 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.366 14:01:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.366 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:11.366 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:11.366 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.366 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.366 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.366 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:11.366 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:30:11.366 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:11.366 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:11.366 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:11.366 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:11.366 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==: 00:30:11.366 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: 00:30:11.366 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:11.366 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:11.366 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==: 00:30:11.366 14:01:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: ]] 00:30:11.366 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: 00:30:11.366 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:30:11.366 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:11.366 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:11.367 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:11.367 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:11.367 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:11.367 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:11.367 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.367 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.367 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.367 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:11.367 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:11.367 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:11.367 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:11.367 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:11.367 14:01:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:11.367 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:11.367 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:11.367 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:11.367 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:11.367 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:11.367 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:11.367 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.367 14:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.626 nvme0n1 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:11.626 14:01:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB: 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB: 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: ]] 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:30:11.626 14:01:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.626 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.885 nvme0n1 00:30:11.885 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.885 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:11.885 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:11.885 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.885 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.885 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.885 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:11.885 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:11.885 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.885 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZmYjQ0MTQ3MzI3MmY1MzQ3MmRmNWM1MjNhY2ZjNTM4YTkxMjExMDZjYzk2YTJiekRfRw==: 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZmYjQ0MTQ3MzI3MmY1MzQ3MmRmNWM1MjNhY2ZjNTM4YTkxMjExMDZjYzk2YTJiekRfRw==: 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: ]] 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:12.144 14:01:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.144 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.402 nvme0n1 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWI0OWYzNDNmZWM2MjE0ZWViYmNlMTRlZGFhNzhiZmVkMzgyYTU1YTM5ZjkzMzA1OGM3ZjcwMDJkZWZiZjFhMJInd18=: 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWI0OWYzNDNmZWM2MjE0ZWViYmNlMTRlZGFhNzhiZmVkMzgyYTU1YTM5ZjkzMzA1OGM3ZjcwMDJkZWZiZjFhMJInd18=: 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:12.402 
14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:12.402 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:12.403 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:12.403 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.403 14:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.668 nvme0n1 00:30:12.668 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.668 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:12.668 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:12.668 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.668 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:12.668 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.668 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:12.668 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:12.668 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.668 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.668 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.668 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:12.668 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:12.668 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:30:12.668 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:12.668 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:12.668 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:12.668 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:12.668 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUxZGJkODFlNDExZjhjMDZhNzEzNjg2Y2ExNmMyMDSVEwwe: 00:30:12.669 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: 00:30:12.669 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:12.669 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:12.669 14:01:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUxZGJkODFlNDExZjhjMDZhNzEzNjg2Y2ExNmMyMDSVEwwe: 00:30:12.669 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: ]] 00:30:12.669 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: 00:30:12.669 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:30:12.669 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:12.669 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:12.669 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:12.669 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:12.669 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:12.669 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:12.670 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.670 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.670 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.670 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:12.670 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:12.670 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:12.670 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:30:12.670 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:12.670 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:12.670 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:12.670 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:12.670 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:12.670 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:12.670 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:12.670 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:12.670 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.670 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.936 nvme0n1 00:30:12.936 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.936 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:12.936 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:12.936 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.936 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.936 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==: 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==: 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: ]] 00:30:13.195 14:01:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.195 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.454 nvme0n1 00:30:13.454 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.454 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:13.454 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:13.454 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.454 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.454 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.454 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:13.454 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:13.454 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.454 14:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:30:13.454 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.454 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:13.454 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:30:13.454 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:13.454 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:13.454 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:13.454 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:13.454 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB: 00:30:13.454 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: 00:30:13.454 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:13.454 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:13.454 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB: 00:30:13.454 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: ]] 00:30:13.454 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: 00:30:13.454 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:30:13.454 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:13.454 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:13.454 
14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:13.454 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:13.455 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:13.455 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:13.455 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.455 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.455 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.455 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:13.455 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:13.455 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:13.455 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:13.455 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:13.455 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:13.455 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:13.455 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:13.455 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:13.455 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:13.455 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:13.455 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:13.455 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.455 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.022 nvme0n1 00:30:14.022 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.022 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:14.022 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:14.022 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.022 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.022 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.022 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:14.022 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:14.022 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.022 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:14.023 14:01:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZmYjQ0MTQ3MzI3MmY1MzQ3MmRmNWM1MjNhY2ZjNTM4YTkxMjExMDZjYzk2YTJiekRfRw==: 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZmYjQ0MTQ3MzI3MmY1MzQ3MmRmNWM1MjNhY2ZjNTM4YTkxMjExMDZjYzk2YTJiekRfRw==: 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: ]] 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.023 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:30:14.282 nvme0n1 00:30:14.282 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.282 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:14.282 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:14.282 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.282 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NWI0OWYzNDNmZWM2MjE0ZWViYmNlMTRlZGFhNzhiZmVkMzgyYTU1YTM5ZjkzMzA1OGM3ZjcwMDJkZWZiZjFhMJInd18=: 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWI0OWYzNDNmZWM2MjE0ZWViYmNlMTRlZGFhNzhiZmVkMzgyYTU1YTM5ZjkzMzA1OGM3ZjcwMDJkZWZiZjFhMJInd18=: 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:14.541 
14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.541 14:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.800 nvme0n1 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUxZGJkODFlNDExZjhjMDZhNzEzNjg2Y2ExNmMyMDSVEwwe: 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWUxZGJkODFlNDExZjhjMDZhNzEzNjg2Y2ExNmMyMDSVEwwe: 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: ]] 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDM0NjM1MTIxMzg2NDkyYjc3NzFjMTUzYjhjYjcwOTA4MjdhNjc3MWJhMzNmY2M3NDc3MTM2ZjdhOGZhN2Q0ZTBmLPE=: 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:14.800 14:01:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.800 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.367 nvme0n1 00:30:15.367 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.626 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:15.626 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:15.626 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.626 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.626 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.626 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:15.626 14:01:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:15.626 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.626 14:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==: 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==: 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: ]] 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:15.626 14:01:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.626 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.195 nvme0n1 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.195 14:01:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB: 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB: 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: ]] 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:16.195 14:01:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.195 14:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.763 nvme0n1 00:30:16.763 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.763 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:16.763 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:16.763 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.763 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.763 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.763 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:16.763 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:16.763 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.763 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.763 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.763 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:16.763 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:30:16.763 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:16.763 14:01:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:16.763 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:16.763 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:16.763 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmZmYjQ0MTQ3MzI3MmY1MzQ3MmRmNWM1MjNhY2ZjNTM4YTkxMjExMDZjYzk2YTJiekRfRw==: 00:30:16.763 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: 00:30:16.763 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:16.763 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:16.763 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmZmYjQ0MTQ3MzI3MmY1MzQ3MmRmNWM1MjNhY2ZjNTM4YTkxMjExMDZjYzk2YTJiekRfRw==: 00:30:16.763 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: ]] 00:30:16.763 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTIxODUwYTMwOTM4MDRkYWFlMjIwNzc0YzBiMGQ0YjObcnk1: 00:30:16.763 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:30:16.764 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:16.764 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:16.764 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:16.764 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:16.764 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:16.764 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:16.764 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.764 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.764 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.764 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:16.764 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:16.764 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:16.764 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:16.764 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:16.764 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:16.764 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:16.764 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:16.764 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:16.764 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:16.764 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:16.764 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:16.764 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.764 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:30:17.331 nvme0n1 00:30:17.331 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.331 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:17.331 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:17.331 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.331 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NWI0OWYzNDNmZWM2MjE0ZWViYmNlMTRlZGFhNzhiZmVkMzgyYTU1YTM5ZjkzMzA1OGM3ZjcwMDJkZWZiZjFhMJInd18=: 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWI0OWYzNDNmZWM2MjE0ZWViYmNlMTRlZGFhNzhiZmVkMzgyYTU1YTM5ZjkzMzA1OGM3ZjcwMDJkZWZiZjFhMJInd18=: 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:17.590 
14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.590 14:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.158 nvme0n1 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==: 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==: 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: ]] 00:30:18.159 
14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.159 request: 00:30:18.159 { 00:30:18.159 "name": "nvme0", 00:30:18.159 "trtype": "tcp", 00:30:18.159 "traddr": "10.0.0.1", 00:30:18.159 "adrfam": "ipv4", 00:30:18.159 "trsvcid": "4420", 00:30:18.159 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:18.159 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:18.159 "prchk_reftag": false, 00:30:18.159 "prchk_guard": false, 00:30:18.159 "hdgst": false, 00:30:18.159 "ddgst": false, 00:30:18.159 "allow_unrecognized_csi": false, 00:30:18.159 "method": "bdev_nvme_attach_controller", 00:30:18.159 "req_id": 1 00:30:18.159 } 00:30:18.159 Got JSON-RPC error response 00:30:18.159 response: 00:30:18.159 { 00:30:18.159 "code": -5, 00:30:18.159 "message": "Input/output 
error" 00:30:18.159 } 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.159 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.419 request: 00:30:18.419 { 00:30:18.419 "name": "nvme0", 00:30:18.419 "trtype": "tcp", 00:30:18.419 "traddr": "10.0.0.1", 
00:30:18.419 "adrfam": "ipv4", 00:30:18.419 "trsvcid": "4420", 00:30:18.420 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:18.420 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:18.420 "prchk_reftag": false, 00:30:18.420 "prchk_guard": false, 00:30:18.420 "hdgst": false, 00:30:18.420 "ddgst": false, 00:30:18.420 "dhchap_key": "key2", 00:30:18.420 "allow_unrecognized_csi": false, 00:30:18.420 "method": "bdev_nvme_attach_controller", 00:30:18.420 "req_id": 1 00:30:18.420 } 00:30:18.420 Got JSON-RPC error response 00:30:18.420 response: 00:30:18.420 { 00:30:18.420 "code": -5, 00:30:18.420 "message": "Input/output error" 00:30:18.420 } 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:18.420 14:02:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:18.420 14:02:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.420 request: 00:30:18.420 { 00:30:18.420 "name": "nvme0", 00:30:18.420 "trtype": "tcp", 00:30:18.420 "traddr": "10.0.0.1", 00:30:18.420 "adrfam": "ipv4", 00:30:18.420 "trsvcid": "4420", 00:30:18.420 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:18.420 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:18.420 "prchk_reftag": false, 00:30:18.420 "prchk_guard": false, 00:30:18.420 "hdgst": false, 00:30:18.420 "ddgst": false, 00:30:18.420 "dhchap_key": "key1", 00:30:18.420 "dhchap_ctrlr_key": "ckey2", 00:30:18.420 "allow_unrecognized_csi": false, 00:30:18.420 "method": "bdev_nvme_attach_controller", 00:30:18.420 "req_id": 1 00:30:18.420 } 00:30:18.420 Got JSON-RPC error response 00:30:18.420 response: 00:30:18.420 { 00:30:18.420 "code": -5, 00:30:18.420 "message": "Input/output error" 00:30:18.420 } 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.420 14:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.680 nvme0n1 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:18.680 14:02:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB: 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB: 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: ]] 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:30:18.680 14:02:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.680 request: 00:30:18.680 { 00:30:18.680 "name": "nvme0", 00:30:18.680 "dhchap_key": "key1", 00:30:18.680 "dhchap_ctrlr_key": "ckey2", 00:30:18.680 "method": "bdev_nvme_set_keys", 00:30:18.680 "req_id": 1 00:30:18.680 } 00:30:18.680 Got JSON-RPC error response 00:30:18.680 response: 00:30:18.680 { 00:30:18.680 "code": -13, 00:30:18.680 "message": "Permission denied" 00:30:18.680 } 00:30:18.680 
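The `DHHC-1:<hash id>:<base64 payload>:` strings passed around in this test are NVMe in-band authentication secrets in their ASCII representation; the base64 payload is (as far as the format is documented for nvme-cli/SPDK key generation) the raw secret followed by a 4-byte CRC. A quick sanity check on the keyid-1 key from the trace, verifying only the payload length (CRC validation is out of scope for this sketch):

```shell
#!/usr/bin/env bash
# Split a DHHC-1 secret from the trace on ':' and measure the decoded
# payload. 72 base64 chars with '==' padding decode to 52 bytes,
# consistent with a 48-byte secret plus a 4-byte CRC.
key="DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==:"
payload=$(echo "$key" | cut -d: -f3)      # base64 field (no ':' in base64)
nbytes=$(echo "$payload" | base64 -d | wc -c)
echo "payload bytes: $nbytes"              # 52
```

The hash-id field (`00`, `01`, `02` in the trace) distinguishes the transformation applied to the secret; the exact semantics should be taken from the NVMe DH-HMAC-CHAP specification rather than this sketch.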
14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.680 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.938 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:30:18.938 14:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:30:19.882 14:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:30:19.882 14:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:30:19.882 14:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.882 14:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.882 14:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.882 14:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:30:19.882 14:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==: 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc3MDIwMTliMjQ0NjFiY2ZhOWMzYjQ1N2E0YTYxMTIyZTJkZTk5Y2E4YzNkNDAyv6uPAg==: 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: ]] 00:30:20.819 14:02:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjczMTE5ZTYyOTM4YzE5NjMxOTAyMWFlMjIwZDA4NDcwZTE0NDI1ZWRmZmQzOWFjzZdreQ==: 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.819 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.078 nvme0n1 00:30:21.078 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.078 14:02:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:21.078 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:21.078 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:21.078 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:21.078 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:21.078 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB: 00:30:21.078 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: 00:30:21.078 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:21.078 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:21.078 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTZjMWJjNGQ2MjNhYzNlZTAyMGExYzI2MzVkNzdjYjmC2gyB: 00:30:21.078 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: ]] 00:30:21.078 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTM2ZjE2NWUxNjg4YWFlOWE5N2JmMjM4YzlmYjE5NWPlsC6W: 00:30:21.078 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:30:21.078 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:21.078 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:30:21.078 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:21.078 
14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:21.078 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:21.078 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:21.079 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:30:21.079 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.079 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.079 request: 00:30:21.079 { 00:30:21.079 "name": "nvme0", 00:30:21.079 "dhchap_key": "key2", 00:30:21.079 "dhchap_ctrlr_key": "ckey1", 00:30:21.079 "method": "bdev_nvme_set_keys", 00:30:21.079 "req_id": 1 00:30:21.079 } 00:30:21.079 Got JSON-RPC error response 00:30:21.079 response: 00:30:21.079 { 00:30:21.079 "code": -13, 00:30:21.079 "message": "Permission denied" 00:30:21.079 } 00:30:21.079 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:21.079 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:21.079 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:21.079 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:21.079 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:21.079 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:30:21.079 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:30:21.079 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.079 14:02:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.079 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.079 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:30:21.079 14:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:22.455 rmmod nvme_tcp 00:30:22.455 rmmod nvme_fabrics 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 798597 ']' 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 798597 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 798597 ']' 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 798597 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 798597 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 798597' 00:30:22.455 killing process with pid 798597 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 798597 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 798597 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:22.455 14:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:24.984 14:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:24.984 14:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:30:24.984 14:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:24.984 14:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:30:24.984 14:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:30:24.984 14:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:30:24.984 14:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:24.984 14:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:24.984 14:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:24.984 14:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:24.984 14:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:30:24.984 14:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:30:24.984 14:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:27.519 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:27.519 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:27.519 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:27.519 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:27.519 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:27.519 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:27.519 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:27.519 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:27.519 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:27.519 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:27.519 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:27.519 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:27.519 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:27.519 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:27.519 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:27.519 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:28.898 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:30:28.898 14:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.UdC /tmp/spdk.key-null.WVV /tmp/spdk.key-sha256.Xd7 /tmp/spdk.key-sha384.Y9E /tmp/spdk.key-sha512.oY7 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:30:28.898 14:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:32.189 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:30:32.189 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:32.189 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:30:32.189 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:30:32.189 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:30:32.189 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:30:32.189 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:30:32.189 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:30:32.189 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:30:32.189 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:30:32.189 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:30:32.189 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:30:32.189 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:30:32.189 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:30:32.189 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:30:32.189 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:30:32.189 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:30:32.189 00:30:32.189 real 0m54.327s 00:30:32.189 user 0m48.403s 00:30:32.189 sys 0m12.782s 00:30:32.189 14:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:32.189 14:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.189 ************************************ 00:30:32.189 END TEST nvmf_auth_host 00:30:32.189 ************************************ 00:30:32.189 14:02:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:30:32.190 14:02:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.190 ************************************ 00:30:32.190 START TEST nvmf_digest 00:30:32.190 ************************************ 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:32.190 * Looking for test storage... 00:30:32.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:32.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.190 --rc genhtml_branch_coverage=1 00:30:32.190 --rc genhtml_function_coverage=1 00:30:32.190 --rc genhtml_legend=1 00:30:32.190 --rc geninfo_all_blocks=1 00:30:32.190 --rc geninfo_unexecuted_blocks=1 00:30:32.190 00:30:32.190 ' 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:32.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.190 --rc genhtml_branch_coverage=1 00:30:32.190 --rc genhtml_function_coverage=1 00:30:32.190 --rc genhtml_legend=1 00:30:32.190 --rc geninfo_all_blocks=1 00:30:32.190 --rc geninfo_unexecuted_blocks=1 00:30:32.190 00:30:32.190 ' 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:32.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.190 --rc genhtml_branch_coverage=1 00:30:32.190 --rc genhtml_function_coverage=1 00:30:32.190 --rc genhtml_legend=1 00:30:32.190 --rc geninfo_all_blocks=1 00:30:32.190 --rc geninfo_unexecuted_blocks=1 00:30:32.190 00:30:32.190 ' 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:32.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.190 --rc genhtml_branch_coverage=1 00:30:32.190 --rc genhtml_function_coverage=1 00:30:32.190 --rc genhtml_legend=1 00:30:32.190 --rc geninfo_all_blocks=1 00:30:32.190 --rc geninfo_unexecuted_blocks=1 00:30:32.190 00:30:32.190 ' 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:32.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:32.190 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:32.191 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:32.191 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:30:32.191 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:30:32.191 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:30:32.191 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:30:32.191 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:32.191 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:32.191 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:32.191 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:32.191 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:32.191 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.191 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:32.191 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.191 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:32.191 14:02:14 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:32.191 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:30:32.191 14:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:38.867 14:02:20 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:38.867 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:38.867 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:38.867 Found net devices under 0000:86:00.0: cvl_0_0 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:38.867 Found net devices under 0000:86:00.1: cvl_0_1 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:38.867 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:38.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:38.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:30:38.868 00:30:38.868 --- 10.0.0.2 ping statistics --- 00:30:38.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.868 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:38.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:38.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms
00:30:38.868 
00:30:38.868 --- 10.0.0.1 ping statistics ---
00:30:38.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:38.868 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]]
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:30:38.868 ************************************
00:30:38.868 START TEST nvmf_digest_clean
00:30:38.868 ************************************
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]]
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc")
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=812368
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 812368
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 812368 ']'
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:38.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:30:38.868 [2024-12-05 14:02:20.634898] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization...
00:30:38.868 [2024-12-05 14:02:20.634941] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:38.868 [2024-12-05 14:02:20.694631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:38.868 [2024-12-05 14:02:20.733379] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:38.868 [2024-12-05 14:02:20.733414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:38.868 [2024-12-05 14:02:20.733422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:38.868 [2024-12-05 14:02:20.733428] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:38.868 [2024-12-05 14:02:20.733433] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:38.868 [2024-12-05 14:02:20.733985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]]
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:30:38.868 null0
00:30:38.868 [2024-12-05 14:02:20.917632] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:38.868 [2024-12-05 14:02:20.941833] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=812387
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 812387 /var/tmp/bperf.sock
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 812387 ']'
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:30:38.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:38.868 14:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:30:38.868 [2024-12-05 14:02:20.994046] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization...
00:30:38.868 [2024-12-05 14:02:20.994086] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid812387 ]
00:30:38.868 [2024-12-05 14:02:21.068315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:38.868 [2024-12-05 14:02:21.109664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:38.868 14:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:38.868 14:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0
00:30:38.868 14:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:30:38.868 14:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:30:38.868 14:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:30:38.868 14:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:38.868 14:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:39.165 nvme0n1
00:30:39.165 14:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:30:39.165 14:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:39.165 Running I/O for 2 seconds...
00:30:41.476 25516.00 IOPS, 99.67 MiB/s [2024-12-05T13:02:24.063Z] 25338.50 IOPS, 98.98 MiB/s
00:30:41.476 Latency(us)
00:30:41.476 [2024-12-05T13:02:24.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:41.476 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:30:41.476 nvme0n1 : 2.05 24822.36 96.96 0.00 0.00 5050.79 2402.99 44689.31
00:30:41.476 [2024-12-05T13:02:24.063Z] ===================================================================================================================
00:30:41.476 [2024-12-05T13:02:24.063Z] Total : 24822.36 96.96 0.00 0.00 5050.79 2402.99 44689.31
00:30:41.476 {
00:30:41.476 "results": [
00:30:41.476 {
00:30:41.476 "job": "nvme0n1",
00:30:41.476 "core_mask": "0x2",
00:30:41.476 "workload": "randread",
00:30:41.476 "status": "finished",
00:30:41.476 "queue_depth": 128,
00:30:41.476 "io_size": 4096,
00:30:41.476 "runtime": 2.046945,
00:30:41.476 "iops": 24822.35722015003,
00:30:41.476 "mibps": 96.96233289121105,
00:30:41.476 "io_failed": 0,
00:30:41.476 "io_timeout": 0,
00:30:41.476 "avg_latency_us": 5050.787001134009,
00:30:41.476 "min_latency_us": 2402.9866666666667,
00:30:41.476 "max_latency_us": 44689.310476190476
00:30:41.476 }
00:30:41.476 ],
00:30:41.476 "core_count": 1
00:30:41.476 }
00:30:41.476 14:02:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:30:41.476 14:02:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:30:41.476 14:02:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:30:41.476 14:02:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:30:41.476 | select(.opcode=="crc32c")
00:30:41.476 | "\(.module_name) \(.executed)"'
00:30:41.476 14:02:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:30:41.476 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:30:41.476 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:30:41.476 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:30:41.476 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:30:41.476 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 812387
00:30:41.476 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 812387 ']'
00:30:41.476 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 812387
00:30:41.476 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:30:41.476 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:41.476 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 812387
00:30:41.476 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:41.734 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:41.734 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 812387'
00:30:41.734 killing process with pid 812387
00:30:41.734 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 812387
00:30:41.734 Received shutdown signal, test time was about 2.000000 seconds
00:30:41.734 
00:30:41.734 Latency(us)
00:30:41.734 [2024-12-05T13:02:24.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:41.734 [2024-12-05T13:02:24.321Z] ===================================================================================================================
00:30:41.734 [2024-12-05T13:02:24.321Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:41.734 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 812387
00:30:41.734 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false
00:30:41.734 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:30:41.734 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:30:41.734 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread
00:30:41.734 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:30:41.734 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:30:41.734 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:30:41.734 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=812861
00:30:41.734 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 812861 /var/tmp/bperf.sock
00:30:41.734 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:30:41.734 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 812861 ']'
00:30:41.734 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:41.734 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:41.734 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:30:41.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:30:41.734 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:41.734 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:30:41.734 [2024-12-05 14:02:24.266909] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization...
00:30:41.734 [2024-12-05 14:02:24.266958] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid812861 ]
00:30:41.734 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:41.734 Zero copy mechanism will not be used.
00:30:41.991 [2024-12-05 14:02:24.342142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:41.991 [2024-12-05 14:02:24.379178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:41.991 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:41.991 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0
00:30:41.991 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:30:41.991 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:30:41.991 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:30:42.249 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:42.249 14:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:42.507 nvme0n1
00:30:42.507 14:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:30:42.507 14:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:42.764 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:42.764 Zero copy mechanism will not be used.
00:30:42.764 Running I/O for 2 seconds...
00:30:44.631 5417.00 IOPS, 677.12 MiB/s [2024-12-05T13:02:27.218Z] 5562.00 IOPS, 695.25 MiB/s
00:30:44.631 Latency(us)
00:30:44.631 [2024-12-05T13:02:27.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:44.631 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:30:44.631 nvme0n1 : 2.00 5563.22 695.40 0.00 0.00 2873.30 624.15 6553.60
00:30:44.631 [2024-12-05T13:02:27.218Z] ===================================================================================================================
00:30:44.631 [2024-12-05T13:02:27.218Z] Total : 5563.22 695.40 0.00 0.00 2873.30 624.15 6553.60
00:30:44.631 {
00:30:44.631 "results": [
00:30:44.631 {
00:30:44.631 "job": "nvme0n1",
00:30:44.631 "core_mask": "0x2",
00:30:44.631 "workload": "randread",
00:30:44.631 "status": "finished",
00:30:44.631 "queue_depth": 16,
00:30:44.631 "io_size": 131072,
00:30:44.631 "runtime": 2.002438,
00:30:44.631 "iops": 5563.218436725631,
00:30:44.631 "mibps": 695.4023045907039,
00:30:44.631 "io_failed": 0,
00:30:44.631 "io_timeout": 0,
00:30:44.631 "avg_latency_us": 2873.2962010771994,
00:30:44.631 "min_latency_us": 624.152380952381,
00:30:44.631 "max_latency_us": 6553.6
00:30:44.631 }
00:30:44.631 ],
00:30:44.631 "core_count": 1
00:30:44.631 }
00:30:44.631 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:30:44.631 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:30:44.631 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:30:44.631 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:30:44.631 | select(.opcode=="crc32c")
00:30:44.631 | "\(.module_name) \(.executed)"'
00:30:44.631 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:30:44.889 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:30:44.889 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:30:44.889 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:30:44.889 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:30:44.889 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 812861
00:30:44.889 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 812861 ']'
00:30:44.889 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 812861
00:30:44.889 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:30:44.889 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:44.889 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 812861
00:30:44.889 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:44.889 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:44.889 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 812861'
00:30:44.889 killing process with pid 812861
00:30:44.889 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 812861
00:30:44.889 Received shutdown signal, test time was about 2.000000 seconds
00:30:44.889 Latency(us)
[2024-12-05T13:02:27.476Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-05T13:02:27.476Z] ===================================================================================================================
[2024-12-05T13:02:27.476Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:44.889 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 812861
00:30:45.147 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:30:45.147 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:30:45.147 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:30:45.147 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:30:45.147 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:30:45.147 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:30:45.147 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:30:45.147 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=813481
00:30:45.147 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 813481 /var/tmp/bperf.sock
00:30:45.147 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:30:45.147 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 813481 ']'
00:30:45.147 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:45.147 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:45.147 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:30:45.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:30:45.147 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:45.147 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:30:45.147 [2024-12-05 14:02:27.594023] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization...
00:30:45.147 [2024-12-05 14:02:27.594070] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid813481 ]
00:30:45.147 [2024-12-05 14:02:27.667932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:45.147 [2024-12-05 14:02:27.709461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:45.406 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:45.406 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0
00:30:45.406 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:30:45.406 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:30:45.406 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:30:45.665 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:45.665 14:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:45.923 nvme0n1
00:30:45.924 14:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:30:45.924 14:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:45.924 Running I/O for 2 seconds...
00:30:48.237 28864.00 IOPS, 112.75 MiB/s [2024-12-05T13:02:30.824Z] 28829.00 IOPS, 112.61 MiB/s
00:30:48.237 Latency(us)
00:30:48.237 [2024-12-05T13:02:30.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:48.237 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:30:48.237 nvme0n1 : 2.00 28847.52 112.69 0.00 0.00 4432.77 1771.03 11047.50
00:30:48.237 [2024-12-05T13:02:30.824Z] ===================================================================================================================
00:30:48.237 [2024-12-05T13:02:30.824Z] Total : 28847.52 112.69 0.00 0.00 4432.77 1771.03 11047.50
00:30:48.237 {
00:30:48.237 "results": [
00:30:48.237 {
00:30:48.237 "job": "nvme0n1",
00:30:48.237 "core_mask": "0x2",
00:30:48.237 "workload": "randwrite",
00:30:48.237 "status": "finished",
00:30:48.237 "queue_depth": 128,
00:30:48.237 "io_size": 4096,
00:30:48.237 "runtime": 2.003153,
00:30:48.237 "iops": 28847.521881753415,
00:30:48.237 "mibps": 112.68563235059928,
00:30:48.237 "io_failed": 0,
00:30:48.237 "io_timeout": 0,
00:30:48.237 "avg_latency_us": 4432.7702059981575,
00:30:48.237 "min_latency_us": 1771.032380952381,
00:30:48.237 "max_latency_us": 11047.497142857143
00:30:48.237 }
00:30:48.237 ],
00:30:48.237 "core_count": 1
00:30:48.237 }
00:30:48.237 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:30:48.237 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:30:48.237 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:30:48.237 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:30:48.237 | select(.opcode=="crc32c")
00:30:48.237 | "\(.module_name) \(.executed)"'
00:30:48.237 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:30:48.237 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:30:48.237 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:30:48.237 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:30:48.237 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:30:48.237 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 813481
00:30:48.237 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 813481 ']'
00:30:48.237 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 813481
00:30:48.237 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:30:48.237 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:48.237 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 813481
00:30:48.237 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:48.237 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:48.237 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 813481'
00:30:48.237 killing process with pid 813481
00:30:48.237 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 813481
00:30:48.237 Received shutdown signal, test time was about 2.000000 seconds
00:30:48.237 Latency(us) 00:30:48.237 [2024-12-05T13:02:30.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:48.237 [2024-12-05T13:02:30.824Z] =================================================================================================================== 00:30:48.237 [2024-12-05T13:02:30.824Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:48.237 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 813481 00:30:48.496 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:30:48.496 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:48.496 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:48.496 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:48.496 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:48.496 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:48.497 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:48.497 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=814027 00:30:48.497 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 814027 /var/tmp/bperf.sock 00:30:48.497 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:48.497 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 814027 ']' 00:30:48.497 14:02:30 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:48.497 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:48.497 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:48.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:48.497 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:48.497 14:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:48.497 [2024-12-05 14:02:30.959464] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:30:48.497 [2024-12-05 14:02:30.959515] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid814027 ] 00:30:48.497 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:48.497 Zero copy mechanism will not be used. 
00:30:48.497 [2024-12-05 14:02:31.034718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.497 [2024-12-05 14:02:31.071248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:48.756 14:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:48.756 14:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:48.756 14:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:48.756 14:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:48.756 14:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:49.015 14:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:49.015 14:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:49.273 nvme0n1 00:30:49.273 14:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:49.273 14:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:49.533 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:49.533 Zero copy mechanism will not be used. 00:30:49.533 Running I/O for 2 seconds... 
00:30:51.406 6443.00 IOPS, 805.38 MiB/s [2024-12-05T13:02:33.993Z] 6475.00 IOPS, 809.38 MiB/s 00:30:51.406 Latency(us) 00:30:51.406 [2024-12-05T13:02:33.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:51.406 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:51.406 nvme0n1 : 2.00 6471.15 808.89 0.00 0.00 2468.07 1958.28 11047.50 00:30:51.406 [2024-12-05T13:02:33.993Z] =================================================================================================================== 00:30:51.406 [2024-12-05T13:02:33.993Z] Total : 6471.15 808.89 0.00 0.00 2468.07 1958.28 11047.50 00:30:51.406 { 00:30:51.406 "results": [ 00:30:51.406 { 00:30:51.406 "job": "nvme0n1", 00:30:51.406 "core_mask": "0x2", 00:30:51.406 "workload": "randwrite", 00:30:51.406 "status": "finished", 00:30:51.406 "queue_depth": 16, 00:30:51.406 "io_size": 131072, 00:30:51.406 "runtime": 2.003661, 00:30:51.406 "iops": 6471.1545515933085, 00:30:51.406 "mibps": 808.8943189491636, 00:30:51.406 "io_failed": 0, 00:30:51.406 "io_timeout": 0, 00:30:51.406 "avg_latency_us": 2468.0699323505432, 00:30:51.406 "min_latency_us": 1958.2780952380951, 00:30:51.406 "max_latency_us": 11047.497142857143 00:30:51.406 } 00:30:51.406 ], 00:30:51.406 "core_count": 1 00:30:51.406 } 00:30:51.406 14:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:51.406 14:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:51.406 14:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:51.406 14:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:51.406 | select(.opcode=="crc32c") 00:30:51.406 | "\(.module_name) \(.executed)"' 00:30:51.406 14:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:51.666 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:51.666 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:51.666 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:51.666 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:51.666 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 814027 00:30:51.666 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 814027 ']' 00:30:51.666 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 814027 00:30:51.666 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:51.666 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:51.666 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 814027 00:30:51.666 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:51.666 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:51.666 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 814027' 00:30:51.666 killing process with pid 814027 00:30:51.666 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 814027 00:30:51.666 Received shutdown signal, test time was about 2.000000 seconds 00:30:51.666 
00:30:51.666 Latency(us) 00:30:51.666 [2024-12-05T13:02:34.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:51.666 [2024-12-05T13:02:34.253Z] =================================================================================================================== 00:30:51.666 [2024-12-05T13:02:34.253Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:51.666 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 814027 00:30:51.926 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 812368 00:30:51.926 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 812368 ']' 00:30:51.926 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 812368 00:30:51.926 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:51.926 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:51.926 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 812368 00:30:51.926 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:51.926 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:51.926 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 812368' 00:30:51.926 killing process with pid 812368 00:30:51.926 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 812368 00:30:51.926 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 812368 00:30:52.185 00:30:52.185 real 0m13.959s 
00:30:52.185 user 0m26.791s 00:30:52.185 sys 0m4.480s 00:30:52.185 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:52.185 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:52.185 ************************************ 00:30:52.185 END TEST nvmf_digest_clean 00:30:52.185 ************************************ 00:30:52.185 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:30:52.185 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:52.185 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:52.185 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:52.185 ************************************ 00:30:52.185 START TEST nvmf_digest_error 00:30:52.185 ************************************ 00:30:52.185 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:30:52.185 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:30:52.185 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:52.185 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:52.185 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:52.185 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=814636 00:30:52.185 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 814636 00:30:52.185 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:52.185 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 814636 ']' 00:30:52.185 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:52.185 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:52.185 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:52.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:52.185 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:52.185 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:52.185 [2024-12-05 14:02:34.674501] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:30:52.185 [2024-12-05 14:02:34.674546] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:52.185 [2024-12-05 14:02:34.752932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.444 [2024-12-05 14:02:34.794149] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:52.444 [2024-12-05 14:02:34.794180] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:52.444 [2024-12-05 14:02:34.794187] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:52.444 [2024-12-05 14:02:34.794193] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:52.444 [2024-12-05 14:02:34.794198] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:52.444 [2024-12-05 14:02:34.794768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.444 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:52.444 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:52.444 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:52.444 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:52.444 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:52.444 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:52.444 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:30:52.444 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.444 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:52.444 [2024-12-05 14:02:34.859204] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:30:52.444 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.444 14:02:34 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:30:52.444 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:30:52.444 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.444 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:52.444 null0 00:30:52.444 [2024-12-05 14:02:34.955190] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:52.444 [2024-12-05 14:02:34.979394] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:52.444 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.444 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:30:52.444 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:52.444 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:30:52.444 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:30:52.444 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:30:52.444 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=814764 00:30:52.444 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 814764 /var/tmp/bperf.sock 00:30:52.444 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:30:52.444 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 814764 ']' 
00:30:52.444 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:52.444 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:52.444 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:52.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:52.444 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:52.444 14:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:52.702 [2024-12-05 14:02:35.032593] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:30:52.702 [2024-12-05 14:02:35.032634] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid814764 ] 00:30:52.702 [2024-12-05 14:02:35.105516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.702 [2024-12-05 14:02:35.145689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:52.702 14:02:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:52.702 14:02:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:52.702 14:02:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:52.702 14:02:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:52.960 14:02:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:52.960 14:02:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.960 14:02:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:52.960 14:02:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.960 14:02:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:52.960 14:02:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:53.527 nvme0n1 00:30:53.527 14:02:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:53.527 14:02:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.527 14:02:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:53.527 14:02:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.527 14:02:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:53.527 14:02:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:53.527 Running I/O for 2 seconds... 00:30:53.527 [2024-12-05 14:02:36.040751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:53.527 [2024-12-05 14:02:36.040782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.527 [2024-12-05 14:02:36.040796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.527 [2024-12-05 14:02:36.053094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:53.527 [2024-12-05 14:02:36.053119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.527 [2024-12-05 14:02:36.053128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.527 [2024-12-05 14:02:36.065276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:53.527 [2024-12-05 14:02:36.065298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.527 [2024-12-05 14:02:36.065307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.527 [2024-12-05 14:02:36.078089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:53.527 [2024-12-05 14:02:36.078113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10865 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.527 [2024-12-05 14:02:36.078121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.527 [2024-12-05 14:02:36.088587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:53.527 [2024-12-05 14:02:36.088609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.527 [2024-12-05 14:02:36.088618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.527 [2024-12-05 14:02:36.097010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:53.527 [2024-12-05 14:02:36.097030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.527 [2024-12-05 14:02:36.097040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.527 [2024-12-05 14:02:36.106366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:53.527 [2024-12-05 14:02:36.106395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.528 [2024-12-05 14:02:36.106403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.787 [2024-12-05 14:02:36.115751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:53.787 [2024-12-05 14:02:36.115772] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.787 [2024-12-05 14:02:36.115781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.787 [2024-12-05 14:02:36.125518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:53.787 [2024-12-05 14:02:36.125539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.787 [2024-12-05 14:02:36.125548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.787 [2024-12-05 14:02:36.135992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:53.787 [2024-12-05 14:02:36.136018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.787 [2024-12-05 14:02:36.136026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.787 [2024-12-05 14:02:36.145633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:53.787 [2024-12-05 14:02:36.145654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.788 [2024-12-05 14:02:36.145662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:53.788 [2024-12-05 14:02:36.154617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x152b6b0)
00:30:53.788 [2024-12-05 14:02:36.154638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:53.788 [2024-12-05 14:02:36.154646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:53.788 [2024-12-05 14:02:36.163910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:53.788 [2024-12-05 14:02:36.163930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:53.788 [2024-12-05 14:02:36.163938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:53.788 [2024-12-05 14:02:36.174938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:53.788 [2024-12-05 14:02:36.174958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:53.788 [2024-12-05 14:02:36.174966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:53.788 [2024-12-05 14:02:36.183435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:53.788 [2024-12-05 14:02:36.183455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:53.788 [2024-12-05 14:02:36.183463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:53.788 [2024-12-05 14:02:36.194662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:53.788 [2024-12-05 14:02:36.194683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:53.788 [2024-12-05 14:02:36.194691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:53.788 [2024-12-05 14:02:36.205432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:53.788 [2024-12-05 14:02:36.205453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:53.788 [2024-12-05 14:02:36.205461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:53.788 [2024-12-05 14:02:36.213602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:53.788 [2024-12-05 14:02:36.213622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:53.788 [2024-12-05 14:02:36.213631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:53.788 [2024-12-05 14:02:36.225121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:53.788 [2024-12-05 14:02:36.225142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:53.788 [2024-12-05 14:02:36.225151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:53.788 [2024-12-05 14:02:36.233729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:53.788 [2024-12-05 14:02:36.233750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:53.788 [2024-12-05 14:02:36.233758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:53.788 [2024-12-05 14:02:36.245298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:53.788 [2024-12-05 14:02:36.245318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:53.788 [2024-12-05 14:02:36.245326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:53.788 [2024-12-05 14:02:36.257055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:53.788 [2024-12-05 14:02:36.257076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:53.788 [2024-12-05 14:02:36.257084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:53.788 [2024-12-05 14:02:36.264523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:53.788 [2024-12-05 14:02:36.264544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:53.788 [2024-12-05 14:02:36.264554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:53.788 [2024-12-05 14:02:36.275774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:53.788 [2024-12-05 14:02:36.275795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:53.788 [2024-12-05 14:02:36.275804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:53.788 [2024-12-05 14:02:36.286384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:53.788 [2024-12-05 14:02:36.286405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:53.788 [2024-12-05 14:02:36.286414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:53.788 [2024-12-05 14:02:36.296128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:53.788 [2024-12-05 14:02:36.296149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:53.788 [2024-12-05 14:02:36.296157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:53.788 [2024-12-05 14:02:36.306241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:53.788 [2024-12-05 14:02:36.306261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:53.788 [2024-12-05 14:02:36.306273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:53.788 [2024-12-05 14:02:36.314793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:53.788 [2024-12-05 14:02:36.314813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:53.788 [2024-12-05 14:02:36.314821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:53.788 [2024-12-05 14:02:36.323578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:53.788 [2024-12-05 14:02:36.323599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:53.788 [2024-12-05 14:02:36.323607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:53.788 [2024-12-05 14:02:36.332847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:53.788 [2024-12-05 14:02:36.332866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:53.788 [2024-12-05 14:02:36.332874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:53.788 [2024-12-05 14:02:36.343621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:53.788 [2024-12-05 14:02:36.343641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:53.788 [2024-12-05 14:02:36.343649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:53.788 [2024-12-05 14:02:36.351575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:53.788 [2024-12-05 14:02:36.351595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:53.788 [2024-12-05 14:02:36.351603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:53.788 [2024-12-05 14:02:36.362051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:53.788 [2024-12-05 14:02:36.362071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:53.788 [2024-12-05 14:02:36.362080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:53.788 [2024-12-05 14:02:36.370411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:53.788 [2024-12-05 14:02:36.370431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:53.788 [2024-12-05 14:02:36.370440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.048 [2024-12-05 14:02:36.380537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.048 [2024-12-05 14:02:36.380557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.048 [2024-12-05 14:02:36.380565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.048 [2024-12-05 14:02:36.389158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.048 [2024-12-05 14:02:36.389178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.048 [2024-12-05 14:02:36.389186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.048 [2024-12-05 14:02:36.398569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.048 [2024-12-05 14:02:36.398589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.048 [2024-12-05 14:02:36.398597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.048 [2024-12-05 14:02:36.408286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.048 [2024-12-05 14:02:36.408306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.048 [2024-12-05 14:02:36.408315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.048 [2024-12-05 14:02:36.418332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.048 [2024-12-05 14:02:36.418363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.048 [2024-12-05 14:02:36.418379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.048 [2024-12-05 14:02:36.426285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.048 [2024-12-05 14:02:36.426306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.048 [2024-12-05 14:02:36.426314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.048 [2024-12-05 14:02:36.437892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.048 [2024-12-05 14:02:36.437912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.048 [2024-12-05 14:02:36.437920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.048 [2024-12-05 14:02:36.450487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.048 [2024-12-05 14:02:36.450508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.048 [2024-12-05 14:02:36.450516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.048 [2024-12-05 14:02:36.458995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.048 [2024-12-05 14:02:36.459015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.048 [2024-12-05 14:02:36.459023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.048 [2024-12-05 14:02:36.471191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.048 [2024-12-05 14:02:36.471211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.048 [2024-12-05 14:02:36.471223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.048 [2024-12-05 14:02:36.479437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.048 [2024-12-05 14:02:36.479457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.048 [2024-12-05 14:02:36.479465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.048 [2024-12-05 14:02:36.489706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.048 [2024-12-05 14:02:36.489725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.048 [2024-12-05 14:02:36.489733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.048 [2024-12-05 14:02:36.499511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.048 [2024-12-05 14:02:36.499532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.048 [2024-12-05 14:02:36.499540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.048 [2024-12-05 14:02:36.510067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.048 [2024-12-05 14:02:36.510088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.048 [2024-12-05 14:02:36.510096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.048 [2024-12-05 14:02:36.519917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.048 [2024-12-05 14:02:36.519938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.048 [2024-12-05 14:02:36.519946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.048 [2024-12-05 14:02:36.528089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.048 [2024-12-05 14:02:36.528110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.048 [2024-12-05 14:02:36.528117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.048 [2024-12-05 14:02:36.539969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.048 [2024-12-05 14:02:36.539989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.048 [2024-12-05 14:02:36.539997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.048 [2024-12-05 14:02:36.552265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.048 [2024-12-05 14:02:36.552287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.048 [2024-12-05 14:02:36.552295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.048 [2024-12-05 14:02:36.562412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.048 [2024-12-05 14:02:36.562435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.048 [2024-12-05 14:02:36.562444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.048 [2024-12-05 14:02:36.570646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.048 [2024-12-05 14:02:36.570666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.048 [2024-12-05 14:02:36.570674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.048 [2024-12-05 14:02:36.583613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.048 [2024-12-05 14:02:36.583633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.048 [2024-12-05 14:02:36.583641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.048 [2024-12-05 14:02:36.591608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.049 [2024-12-05 14:02:36.591628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.049 [2024-12-05 14:02:36.591636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.049 [2024-12-05 14:02:36.603180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.049 [2024-12-05 14:02:36.603200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.049 [2024-12-05 14:02:36.603208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.049 [2024-12-05 14:02:36.616067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.049 [2024-12-05 14:02:36.616088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.049 [2024-12-05 14:02:36.616096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.049 [2024-12-05 14:02:36.628434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.049 [2024-12-05 14:02:36.628454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.049 [2024-12-05 14:02:36.628462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.308 [2024-12-05 14:02:36.641000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.308 [2024-12-05 14:02:36.641019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.308 [2024-12-05 14:02:36.641027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.308 [2024-12-05 14:02:36.653690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.308 [2024-12-05 14:02:36.653710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.308 [2024-12-05 14:02:36.653718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.308 [2024-12-05 14:02:36.663886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.308 [2024-12-05 14:02:36.663906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.308 [2024-12-05 14:02:36.663914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.308 [2024-12-05 14:02:36.675195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.308 [2024-12-05 14:02:36.675214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.308 [2024-12-05 14:02:36.675222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.308 [2024-12-05 14:02:36.685910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.308 [2024-12-05 14:02:36.685930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.308 [2024-12-05 14:02:36.685939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.308 [2024-12-05 14:02:36.696769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.308 [2024-12-05 14:02:36.696788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.308 [2024-12-05 14:02:36.696796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.308 [2024-12-05 14:02:36.705159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.308 [2024-12-05 14:02:36.705178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.308 [2024-12-05 14:02:36.705186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.308 [2024-12-05 14:02:36.715138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.308 [2024-12-05 14:02:36.715158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.308 [2024-12-05 14:02:36.715166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.308 [2024-12-05 14:02:36.726959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.308 [2024-12-05 14:02:36.726979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.308 [2024-12-05 14:02:36.726987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.308 [2024-12-05 14:02:36.735006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.308 [2024-12-05 14:02:36.735025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.308 [2024-12-05 14:02:36.735034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.308 [2024-12-05 14:02:36.747110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.308 [2024-12-05 14:02:36.747130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.308 [2024-12-05 14:02:36.747142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.308 [2024-12-05 14:02:36.758324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.308 [2024-12-05 14:02:36.758345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.308 [2024-12-05 14:02:36.758354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.308 [2024-12-05 14:02:36.766423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.308 [2024-12-05 14:02:36.766444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.308 [2024-12-05 14:02:36.766452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.308 [2024-12-05 14:02:36.778207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.308 [2024-12-05 14:02:36.778228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.308 [2024-12-05 14:02:36.778236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.308 [2024-12-05 14:02:36.789293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.308 [2024-12-05 14:02:36.789312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.308 [2024-12-05 14:02:36.789320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.308 [2024-12-05 14:02:36.801229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.308 [2024-12-05 14:02:36.801249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.308 [2024-12-05 14:02:36.801258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.308 [2024-12-05 14:02:36.809442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.308 [2024-12-05 14:02:36.809463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.308 [2024-12-05 14:02:36.809471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.308 [2024-12-05 14:02:36.820880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.308 [2024-12-05 14:02:36.820900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.308 [2024-12-05 14:02:36.820908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.308 [2024-12-05 14:02:36.832940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.308 [2024-12-05 14:02:36.832959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.308 [2024-12-05 14:02:36.832967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.308 [2024-12-05 14:02:36.840553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.308 [2024-12-05 14:02:36.840577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.309 [2024-12-05 14:02:36.840586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.309 [2024-12-05 14:02:36.852639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.309 [2024-12-05 14:02:36.852659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.309 [2024-12-05 14:02:36.852667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.309 [2024-12-05 14:02:36.864548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.309 [2024-12-05 14:02:36.864568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.309 [2024-12-05 14:02:36.864576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.309 [2024-12-05 14:02:36.876213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.309 [2024-12-05 14:02:36.876233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.309 [2024-12-05 14:02:36.876241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.309 [2024-12-05 14:02:36.884195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.309 [2024-12-05 14:02:36.884216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.309 [2024-12-05 14:02:36.884224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.569 [2024-12-05 14:02:36.895826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.569 [2024-12-05 14:02:36.895846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.569 [2024-12-05 14:02:36.895854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.569 [2024-12-05 14:02:36.907832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.569 [2024-12-05 14:02:36.907852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.569 [2024-12-05 14:02:36.907861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.569 [2024-12-05 14:02:36.917806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.569 [2024-12-05 14:02:36.917827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.569 [2024-12-05 14:02:36.917835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.569 [2024-12-05 14:02:36.927427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.569 [2024-12-05 14:02:36.927448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.569 [2024-12-05 14:02:36.927462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.569 [2024-12-05 14:02:36.936784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.569 [2024-12-05 14:02:36.936805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:25110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.569 [2024-12-05 14:02:36.936813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.569 [2024-12-05 14:02:36.945229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.569 [2024-12-05 14:02:36.945249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.569 [2024-12-05 14:02:36.945257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.569 [2024-12-05 14:02:36.956870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.569 [2024-12-05 14:02:36.956891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.569 [2024-12-05 14:02:36.956899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.569 [2024-12-05 14:02:36.969086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.569 [2024-12-05 14:02:36.969106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.569 [2024-12-05 14:02:36.969114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.569 [2024-12-05 14:02:36.981422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.569 [2024-12-05 14:02:36.981447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.569 [2024-12-05 14:02:36.981455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:54.569 [2024-12-05 14:02:36.992420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:54.569 [2024-12-05 14:02:36.992440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.569 [2024-12-05 14:02:36.992448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.569 [2024-12-05 14:02:37.004845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.569 [2024-12-05 14:02:37.004866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.570 [2024-12-05 14:02:37.004874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.570 [2024-12-05 14:02:37.013502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.570 [2024-12-05 14:02:37.013522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.570 [2024-12-05 14:02:37.013530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.570 24429.00 IOPS, 95.43 MiB/s [2024-12-05T13:02:37.157Z] [2024-12-05 14:02:37.026559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.570 [2024-12-05 14:02:37.026584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.570 [2024-12-05 14:02:37.026593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.570 [2024-12-05 
14:02:37.037637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.570 [2024-12-05 14:02:37.037657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.570 [2024-12-05 14:02:37.037665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.570 [2024-12-05 14:02:37.046181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.570 [2024-12-05 14:02:37.046201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.570 [2024-12-05 14:02:37.046209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.570 [2024-12-05 14:02:37.057908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.570 [2024-12-05 14:02:37.057928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.570 [2024-12-05 14:02:37.057936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.570 [2024-12-05 14:02:37.066531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.570 [2024-12-05 14:02:37.066550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.570 [2024-12-05 14:02:37.066559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.570 [2024-12-05 14:02:37.079558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.570 [2024-12-05 14:02:37.079578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.570 [2024-12-05 14:02:37.079587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.570 [2024-12-05 14:02:37.091092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.570 [2024-12-05 14:02:37.091113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.570 [2024-12-05 14:02:37.091121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.570 [2024-12-05 14:02:37.103361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.570 [2024-12-05 14:02:37.103386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.570 [2024-12-05 14:02:37.103395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.570 [2024-12-05 14:02:37.111538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.570 [2024-12-05 14:02:37.111558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.570 [2024-12-05 14:02:37.111566] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.570 [2024-12-05 14:02:37.123285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.570 [2024-12-05 14:02:37.123305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.570 [2024-12-05 14:02:37.123313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.570 [2024-12-05 14:02:37.134556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.570 [2024-12-05 14:02:37.134577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.570 [2024-12-05 14:02:37.134585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.570 [2024-12-05 14:02:37.147521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.570 [2024-12-05 14:02:37.147542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.570 [2024-12-05 14:02:37.147550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.830 [2024-12-05 14:02:37.160095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.830 [2024-12-05 14:02:37.160116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.830 [2024-12-05 
14:02:37.160124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.830 [2024-12-05 14:02:37.170850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.830 [2024-12-05 14:02:37.170869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.830 [2024-12-05 14:02:37.170877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.830 [2024-12-05 14:02:37.179489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.830 [2024-12-05 14:02:37.179510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.830 [2024-12-05 14:02:37.179518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.830 [2024-12-05 14:02:37.192731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.830 [2024-12-05 14:02:37.192752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.830 [2024-12-05 14:02:37.192760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.830 [2024-12-05 14:02:37.205063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.830 [2024-12-05 14:02:37.205083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19474 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.830 [2024-12-05 14:02:37.205091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.830 [2024-12-05 14:02:37.216134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.830 [2024-12-05 14:02:37.216154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.830 [2024-12-05 14:02:37.216166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.830 [2024-12-05 14:02:37.224395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.830 [2024-12-05 14:02:37.224421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.830 [2024-12-05 14:02:37.224430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.830 [2024-12-05 14:02:37.233997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.831 [2024-12-05 14:02:37.234017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.831 [2024-12-05 14:02:37.234025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.831 [2024-12-05 14:02:37.244428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.831 [2024-12-05 14:02:37.244448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.831 [2024-12-05 14:02:37.244457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.831 [2024-12-05 14:02:37.252531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.831 [2024-12-05 14:02:37.252550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.831 [2024-12-05 14:02:37.252559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.831 [2024-12-05 14:02:37.263254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.831 [2024-12-05 14:02:37.263277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.831 [2024-12-05 14:02:37.263285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.831 [2024-12-05 14:02:37.274217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.831 [2024-12-05 14:02:37.274238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.831 [2024-12-05 14:02:37.274247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.831 [2024-12-05 14:02:37.283445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x152b6b0) 00:30:54.831 [2024-12-05 14:02:37.283466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.831 [2024-12-05 14:02:37.283474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.831 [2024-12-05 14:02:37.294862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.831 [2024-12-05 14:02:37.294885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.831 [2024-12-05 14:02:37.294893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.831 [2024-12-05 14:02:37.307417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.831 [2024-12-05 14:02:37.307439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.831 [2024-12-05 14:02:37.307447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.831 [2024-12-05 14:02:37.319283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.831 [2024-12-05 14:02:37.319304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.831 [2024-12-05 14:02:37.319313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.831 [2024-12-05 14:02:37.328479] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.831 [2024-12-05 14:02:37.328498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:25342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.831 [2024-12-05 14:02:37.328506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.831 [2024-12-05 14:02:37.336965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.831 [2024-12-05 14:02:37.336986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.831 [2024-12-05 14:02:37.336994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.831 [2024-12-05 14:02:37.346927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.831 [2024-12-05 14:02:37.346948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.831 [2024-12-05 14:02:37.346956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.831 [2024-12-05 14:02:37.356225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.831 [2024-12-05 14:02:37.356246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.831 [2024-12-05 14:02:37.356254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:30:54.831 [2024-12-05 14:02:37.365240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.831 [2024-12-05 14:02:37.365262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.831 [2024-12-05 14:02:37.365271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.831 [2024-12-05 14:02:37.373993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.831 [2024-12-05 14:02:37.374014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.831 [2024-12-05 14:02:37.374022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.831 [2024-12-05 14:02:37.383655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.831 [2024-12-05 14:02:37.383675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.831 [2024-12-05 14:02:37.383687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.831 [2024-12-05 14:02:37.392750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.831 [2024-12-05 14:02:37.392771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.831 [2024-12-05 14:02:37.392779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.831 [2024-12-05 14:02:37.401671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.831 [2024-12-05 14:02:37.401691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.831 [2024-12-05 14:02:37.401701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:54.831 [2024-12-05 14:02:37.410570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:54.831 [2024-12-05 14:02:37.410591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.831 [2024-12-05 14:02:37.410599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.091 [2024-12-05 14:02:37.421150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.091 [2024-12-05 14:02:37.421171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.091 [2024-12-05 14:02:37.421179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.091 [2024-12-05 14:02:37.430341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.091 [2024-12-05 14:02:37.430362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.091 [2024-12-05 14:02:37.430375] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.091 [2024-12-05 14:02:37.439575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.091 [2024-12-05 14:02:37.439597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.091 [2024-12-05 14:02:37.439605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.091 [2024-12-05 14:02:37.448698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.091 [2024-12-05 14:02:37.448719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.091 [2024-12-05 14:02:37.448728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.091 [2024-12-05 14:02:37.457800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.091 [2024-12-05 14:02:37.457820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.091 [2024-12-05 14:02:37.457829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.091 [2024-12-05 14:02:37.466913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.091 [2024-12-05 14:02:37.466938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23123 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:55.091 [2024-12-05 14:02:37.466946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.091 [2024-12-05 14:02:37.476028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.091 [2024-12-05 14:02:37.476049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.091 [2024-12-05 14:02:37.476058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.091 [2024-12-05 14:02:37.485151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.091 [2024-12-05 14:02:37.485171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.091 [2024-12-05 14:02:37.485180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.091 [2024-12-05 14:02:37.495065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.091 [2024-12-05 14:02:37.495085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.091 [2024-12-05 14:02:37.495093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.091 [2024-12-05 14:02:37.504231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.091 [2024-12-05 14:02:37.504252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:119 nsid:1 lba:23377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.091 [2024-12-05 14:02:37.504261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.091 [2024-12-05 14:02:37.513342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.091 [2024-12-05 14:02:37.513363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.091 [2024-12-05 14:02:37.513379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.091 [2024-12-05 14:02:37.522488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.091 [2024-12-05 14:02:37.522508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.091 [2024-12-05 14:02:37.522516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.091 [2024-12-05 14:02:37.531598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.091 [2024-12-05 14:02:37.531620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.091 [2024-12-05 14:02:37.531628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.091 [2024-12-05 14:02:37.540716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.091 [2024-12-05 14:02:37.540737] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.091 [2024-12-05 14:02:37.540745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.091 [2024-12-05 14:02:37.549825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.092 [2024-12-05 14:02:37.549845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.092 [2024-12-05 14:02:37.549853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.092 [2024-12-05 14:02:37.558928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.092 [2024-12-05 14:02:37.558949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.092 [2024-12-05 14:02:37.558957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.092 [2024-12-05 14:02:37.567434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.092 [2024-12-05 14:02:37.567455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.092 [2024-12-05 14:02:37.567463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.092 [2024-12-05 14:02:37.577489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x152b6b0) 00:30:55.092 [2024-12-05 14:02:37.577509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.092 [2024-12-05 14:02:37.577517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.092 [2024-12-05 14:02:37.588547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.092 [2024-12-05 14:02:37.588567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.092 [2024-12-05 14:02:37.588575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.092 [2024-12-05 14:02:37.599026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.092 [2024-12-05 14:02:37.599046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.092 [2024-12-05 14:02:37.599054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.092 [2024-12-05 14:02:37.611037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.092 [2024-12-05 14:02:37.611057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.092 [2024-12-05 14:02:37.611065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.092 [2024-12-05 14:02:37.620944] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.092 [2024-12-05 14:02:37.620965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.092 [2024-12-05 14:02:37.620973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.092 [2024-12-05 14:02:37.629627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.092 [2024-12-05 14:02:37.629647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.092 [2024-12-05 14:02:37.629658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.092 [2024-12-05 14:02:37.639350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.092 [2024-12-05 14:02:37.639377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.092 [2024-12-05 14:02:37.639385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.092 [2024-12-05 14:02:37.647789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.092 [2024-12-05 14:02:37.647808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.092 [2024-12-05 14:02:37.647817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:30:55.092 [2024-12-05 14:02:37.657062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.092 [2024-12-05 14:02:37.657082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.092 [2024-12-05 14:02:37.657090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.092 [2024-12-05 14:02:37.666405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.092 [2024-12-05 14:02:37.666425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.092 [2024-12-05 14:02:37.666434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.092 [2024-12-05 14:02:37.675820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.092 [2024-12-05 14:02:37.675842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.092 [2024-12-05 14:02:37.675850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.352 [2024-12-05 14:02:37.685288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.352 [2024-12-05 14:02:37.685309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.352 [2024-12-05 14:02:37.685318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.352 [2024-12-05 14:02:37.694427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.352 [2024-12-05 14:02:37.694447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.352 [2024-12-05 14:02:37.694455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.352 [2024-12-05 14:02:37.702611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.352 [2024-12-05 14:02:37.702631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.352 [2024-12-05 14:02:37.702639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.352 [2024-12-05 14:02:37.712579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.352 [2024-12-05 14:02:37.712605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.352 [2024-12-05 14:02:37.712614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.352 [2024-12-05 14:02:37.723763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.352 [2024-12-05 14:02:37.723783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.352 [2024-12-05 
14:02:37.723792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.352 [2024-12-05 14:02:37.732038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.352 [2024-12-05 14:02:37.732058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.352 [2024-12-05 14:02:37.732066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.352 [2024-12-05 14:02:37.742657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.352 [2024-12-05 14:02:37.742677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.352 [2024-12-05 14:02:37.742685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.352 [2024-12-05 14:02:37.752287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.352 [2024-12-05 14:02:37.752306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.352 [2024-12-05 14:02:37.752314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.352 [2024-12-05 14:02:37.760691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.352 [2024-12-05 14:02:37.760711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6980 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.352 [2024-12-05 14:02:37.760720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.352 [2024-12-05 14:02:37.770375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.352 [2024-12-05 14:02:37.770394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.352 [2024-12-05 14:02:37.770403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.352 [2024-12-05 14:02:37.779675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.352 [2024-12-05 14:02:37.779695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.352 [2024-12-05 14:02:37.779703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.352 [2024-12-05 14:02:37.789184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.352 [2024-12-05 14:02:37.789204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.352 [2024-12-05 14:02:37.789212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.352 [2024-12-05 14:02:37.797488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.352 [2024-12-05 14:02:37.797509] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.352 [2024-12-05 14:02:37.797516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.352 [2024-12-05 14:02:37.808994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.352 [2024-12-05 14:02:37.809014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.352 [2024-12-05 14:02:37.809022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.352 [2024-12-05 14:02:37.816972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.352 [2024-12-05 14:02:37.816991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.352 [2024-12-05 14:02:37.816999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.352 [2024-12-05 14:02:37.827507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.352 [2024-12-05 14:02:37.827527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.352 [2024-12-05 14:02:37.827535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.352 [2024-12-05 14:02:37.840088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x152b6b0) 00:30:55.352 [2024-12-05 14:02:37.840108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.353 [2024-12-05 14:02:37.840116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.353 [2024-12-05 14:02:37.852528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.353 [2024-12-05 14:02:37.852547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.353 [2024-12-05 14:02:37.852555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.353 [2024-12-05 14:02:37.863351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.353 [2024-12-05 14:02:37.863376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.353 [2024-12-05 14:02:37.863385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.353 [2024-12-05 14:02:37.873261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.353 [2024-12-05 14:02:37.873281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.353 [2024-12-05 14:02:37.873289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.353 [2024-12-05 14:02:37.881548] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.353 [2024-12-05 14:02:37.881572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.353 [2024-12-05 14:02:37.881580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.353 [2024-12-05 14:02:37.891352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.353 [2024-12-05 14:02:37.891377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.353 [2024-12-05 14:02:37.891385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.353 [2024-12-05 14:02:37.899481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.353 [2024-12-05 14:02:37.899501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.353 [2024-12-05 14:02:37.899510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.353 [2024-12-05 14:02:37.909146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.353 [2024-12-05 14:02:37.909166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.353 [2024-12-05 14:02:37.909174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:30:55.353 [2024-12-05 14:02:37.917966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.353 [2024-12-05 14:02:37.917986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.353 [2024-12-05 14:02:37.917994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.353 [2024-12-05 14:02:37.927929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.353 [2024-12-05 14:02:37.927948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.353 [2024-12-05 14:02:37.927956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.353 [2024-12-05 14:02:37.937130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.353 [2024-12-05 14:02:37.937150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.353 [2024-12-05 14:02:37.937159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.611 [2024-12-05 14:02:37.946176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.611 [2024-12-05 14:02:37.946197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.612 [2024-12-05 14:02:37.946205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.612 [2024-12-05 14:02:37.955598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.612 [2024-12-05 14:02:37.955620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.612 [2024-12-05 14:02:37.955628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.612 [2024-12-05 14:02:37.965020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.612 [2024-12-05 14:02:37.965040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.612 [2024-12-05 14:02:37.965049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.612 [2024-12-05 14:02:37.973769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.612 [2024-12-05 14:02:37.973788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.612 [2024-12-05 14:02:37.973797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.612 [2024-12-05 14:02:37.984142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.612 [2024-12-05 14:02:37.984162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.612 [2024-12-05 14:02:37.984170] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.612 [2024-12-05 14:02:37.993306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.612 [2024-12-05 14:02:37.993326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.612 [2024-12-05 14:02:37.993334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.612 [2024-12-05 14:02:38.001242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.612 [2024-12-05 14:02:38.001262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:61 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.612 [2024-12-05 14:02:38.001270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.612 [2024-12-05 14:02:38.010708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.612 [2024-12-05 14:02:38.010728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.612 [2024-12-05 14:02:38.010736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.612 [2024-12-05 14:02:38.020641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0) 00:30:55.612 [2024-12-05 14:02:38.020662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23400 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:30:55.612 [2024-12-05 14:02:38.020670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:55.612 25162.00 IOPS, 98.29 MiB/s [2024-12-05T13:02:38.199Z] [2024-12-05 14:02:38.028870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x152b6b0)
00:30:55.612 [2024-12-05 14:02:38.028890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.612 [2024-12-05 14:02:38.028898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:55.612
00:30:55.612 Latency(us)
00:30:55.612 [2024-12-05T13:02:38.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:55.612 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:30:55.612 nvme0n1 : 2.00 25191.05 98.40 0.00 0.00 5074.25 2278.16 18100.42
00:30:55.612 [2024-12-05T13:02:38.199Z] ===================================================================================================================
00:30:55.612 [2024-12-05T13:02:38.199Z] Total : 25191.05 98.40 0.00 0.00 5074.25 2278.16 18100.42
00:30:55.612 {
00:30:55.612   "results": [
00:30:55.612     {
00:30:55.612       "job": "nvme0n1",
00:30:55.612       "core_mask": "0x2",
00:30:55.612       "workload": "randread",
00:30:55.612       "status": "finished",
00:30:55.612       "queue_depth": 128,
00:30:55.612       "io_size": 4096,
00:30:55.612       "runtime": 2.004323,
00:30:55.612       "iops": 25191.049546405444,
00:30:55.612       "mibps": 98.40253729064626,
00:30:55.612       "io_failed": 0,
00:30:55.612       "io_timeout": 0,
00:30:55.612       "avg_latency_us": 5074.253875777956,
00:30:55.612       "min_latency_us": 2278.1561904761907,
00:30:55.612       "max_latency_us": 18100.41904761905
00:30:55.612     }
00:30:55.612   ],
00:30:55.612   "core_count": 1
00:30:55.612 }
00:30:55.612 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:55.612 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:55.612 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:55.612 | .driver_specific
00:30:55.612 | .nvme_error
00:30:55.612 | .status_code
00:30:55.612 | .command_transient_transport_error'
00:30:55.612 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:55.871 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 198 > 0 ))
00:30:55.871 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 814764
00:30:55.871 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 814764 ']'
00:30:55.871 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 814764
00:30:55.871 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:30:55.871 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:55.871 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 814764
00:30:55.871 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:55.871 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:55.871 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 814764'
killing process with pid 814764
14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 814764
Received shutdown signal, test time was about 2.000000 seconds
00:30:55.871
00:30:55.871 Latency(us)
00:30:55.871 [2024-12-05T13:02:38.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:55.871 [2024-12-05T13:02:38.459Z] ===================================================================================================================
00:30:55.872 [2024-12-05T13:02:38.459Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:55.872 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 814764
00:30:56.148 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:30:56.148 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:30:56.148 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:30:56.148 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:30:56.148 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:30:56.148 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=815241
00:30:56.148 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 815241 /var/tmp/bperf.sock
00:30:56.148 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:30:56.148 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 815241 ']'
00:30:56.148 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:56.148 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:56.148 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:30:56.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:30:56.148 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:56.148 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:56.148 [2024-12-05 14:02:38.505504] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization...
00:30:56.148 [2024-12-05 14:02:38.505552] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid815241 ]
00:30:56.148 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:56.148 Zero copy mechanism will not be used.
00:30:56.148 [2024-12-05 14:02:38.579488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:56.148 [2024-12-05 14:02:38.621735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:56.148 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:56.148 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:30:56.148 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:56.148 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:56.407 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:56.407 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:56.407 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:56.407 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:56.407 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:56.407 14:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:56.976 nvme0n1
00:30:56.976 14:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:56.976 14:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.976 14:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:56.976 14:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.976 14:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:56.976 14:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:56.976 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:56.976 Zero copy mechanism will not be used. 00:30:56.976 Running I/O for 2 seconds... 00:30:56.976 [2024-12-05 14:02:39.462553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:56.976 [2024-12-05 14:02:39.462585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.976 [2024-12-05 14:02:39.462595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:56.976 [2024-12-05 14:02:39.467862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:56.976 [2024-12-05 14:02:39.467887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.976 [2024-12-05 14:02:39.467896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:56.976 
[2024-12-05 14:02:39.473341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:56.976 [2024-12-05 14:02:39.473364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.976 [2024-12-05 14:02:39.473378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:56.976 [2024-12-05 14:02:39.478721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:56.976 [2024-12-05 14:02:39.478743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.976 [2024-12-05 14:02:39.478752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.976 [2024-12-05 14:02:39.483988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:56.976 [2024-12-05 14:02:39.484011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.976 [2024-12-05 14:02:39.484019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:56.976 [2024-12-05 14:02:39.489252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:56.976 [2024-12-05 14:02:39.489273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.976 [2024-12-05 14:02:39.489282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:56.976 [2024-12-05 14:02:39.494504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:56.976 [2024-12-05 14:02:39.494526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.976 [2024-12-05 14:02:39.494534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:56.976 [2024-12-05 14:02:39.499574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:56.976 [2024-12-05 14:02:39.499596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.976 [2024-12-05 14:02:39.499605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.976 [2024-12-05 14:02:39.504824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:56.976 [2024-12-05 14:02:39.504846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.976 [2024-12-05 14:02:39.504854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:56.976 [2024-12-05 14:02:39.510034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:56.976 [2024-12-05 14:02:39.510058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.976 [2024-12-05 14:02:39.510066] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:56.976 [2024-12-05 14:02:39.515165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:56.976 [2024-12-05 14:02:39.515187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.976 [2024-12-05 14:02:39.515196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:56.976 [2024-12-05 14:02:39.520351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:56.976 [2024-12-05 14:02:39.520380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.976 [2024-12-05 14:02:39.520389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.976 [2024-12-05 14:02:39.525476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:56.976 [2024-12-05 14:02:39.525497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.976 [2024-12-05 14:02:39.525506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:56.976 [2024-12-05 14:02:39.530502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:56.976 [2024-12-05 14:02:39.530522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:56.976 [2024-12-05 14:02:39.530531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:56.976 [2024-12-05 14:02:39.535440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:56.976 [2024-12-05 14:02:39.535462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.976 [2024-12-05 14:02:39.535470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:56.976 [2024-12-05 14:02:39.540281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:56.976 [2024-12-05 14:02:39.540301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.976 [2024-12-05 14:02:39.540309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.976 [2024-12-05 14:02:39.545380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:56.976 [2024-12-05 14:02:39.545400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.976 [2024-12-05 14:02:39.545413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:56.976 [2024-12-05 14:02:39.550503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:56.976 [2024-12-05 14:02:39.550523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.976 [2024-12-05 14:02:39.550531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:56.976 [2024-12-05 14:02:39.555625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:56.976 [2024-12-05 14:02:39.555645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.976 [2024-12-05 14:02:39.555653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:56.976 [2024-12-05 14:02:39.560853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:56.976 [2024-12-05 14:02:39.560873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.976 [2024-12-05 14:02:39.560881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:57.237 [2024-12-05 14:02:39.566045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.237 [2024-12-05 14:02:39.566065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.237 [2024-12-05 14:02:39.566073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:57.237 [2024-12-05 14:02:39.571259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.237 [2024-12-05 14:02:39.571279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.237 [2024-12-05 14:02:39.571287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:57.237 [2024-12-05 14:02:39.576398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.237 [2024-12-05 14:02:39.576418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.237 [2024-12-05 14:02:39.576426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:57.237 [2024-12-05 14:02:39.581510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.237 [2024-12-05 14:02:39.581530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.237 [2024-12-05 14:02:39.581538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:57.237 [2024-12-05 14:02:39.586609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.237 [2024-12-05 14:02:39.586629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.237 [2024-12-05 14:02:39.586637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:57.237 [2024-12-05 14:02:39.591708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x5ec1a0) 00:30:57.237 [2024-12-05 14:02:39.591732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.237 [2024-12-05 14:02:39.591740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:57.237 [2024-12-05 14:02:39.596698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.237 [2024-12-05 14:02:39.596720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.237 [2024-12-05 14:02:39.596729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:57.237 [2024-12-05 14:02:39.601878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.237 [2024-12-05 14:02:39.601898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.237 [2024-12-05 14:02:39.601907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:57.237 [2024-12-05 14:02:39.606960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.237 [2024-12-05 14:02:39.606981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.237 [2024-12-05 14:02:39.606989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:57.237 [2024-12-05 14:02:39.612050] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.237 [2024-12-05 14:02:39.612072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.237 [2024-12-05 14:02:39.612081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:57.237 [2024-12-05 14:02:39.617344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.237 [2024-12-05 14:02:39.617364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.237 [2024-12-05 14:02:39.617380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:57.237 [2024-12-05 14:02:39.622424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.237 [2024-12-05 14:02:39.622444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.237 [2024-12-05 14:02:39.622452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:57.237 [2024-12-05 14:02:39.627499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.237 [2024-12-05 14:02:39.627519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.237 [2024-12-05 14:02:39.627527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:0020 p:0 m:0 dnr:0 00:30:57.237 [2024-12-05 14:02:39.632588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.237 [2024-12-05 14:02:39.632608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.238 [2024-12-05 14:02:39.632617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:57.238 [2024-12-05 14:02:39.637630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.238 [2024-12-05 14:02:39.637651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.238 [2024-12-05 14:02:39.637658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:57.238 [2024-12-05 14:02:39.642834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.238 [2024-12-05 14:02:39.642853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.238 [2024-12-05 14:02:39.642862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:57.238 [2024-12-05 14:02:39.647872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.238 [2024-12-05 14:02:39.647893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.238 [2024-12-05 14:02:39.647901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:57.238 [2024-12-05 14:02:39.653001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.238 [2024-12-05 14:02:39.653022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.238 [2024-12-05 14:02:39.653030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:57.238 [2024-12-05 14:02:39.658175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.238 [2024-12-05 14:02:39.658195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.238 [2024-12-05 14:02:39.658204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:57.238 [2024-12-05 14:02:39.663286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.238 [2024-12-05 14:02:39.663306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.238 [2024-12-05 14:02:39.663314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:57.238 [2024-12-05 14:02:39.668433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.238 [2024-12-05 14:02:39.668453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.238 [2024-12-05 
14:02:39.668461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:57.238 [2024-12-05 14:02:39.673565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.238 [2024-12-05 14:02:39.673585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.238 [2024-12-05 14:02:39.673592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:57.238 [2024-12-05 14:02:39.678654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.238 [2024-12-05 14:02:39.678674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.238 [2024-12-05 14:02:39.678688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:57.238 [2024-12-05 14:02:39.683727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.238 [2024-12-05 14:02:39.683747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.238 [2024-12-05 14:02:39.683756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:57.238 [2024-12-05 14:02:39.688762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.238 [2024-12-05 14:02:39.688782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.238 [2024-12-05 14:02:39.688792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:57.238 [2024-12-05 14:02:39.693846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.238 [2024-12-05 14:02:39.693866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.238 [2024-12-05 14:02:39.693874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:57.238 [2024-12-05 14:02:39.698916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.238 [2024-12-05 14:02:39.698936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.238 [2024-12-05 14:02:39.698944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:57.238 [2024-12-05 14:02:39.703981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.238 [2024-12-05 14:02:39.704001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.238 [2024-12-05 14:02:39.704009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:57.238 [2024-12-05 14:02:39.709040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.238 [2024-12-05 14:02:39.709060] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.238 [2024-12-05 14:02:39.709068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:57.238 [2024-12-05 14:02:39.714134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.238 [2024-12-05 14:02:39.714154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.238 [2024-12-05 14:02:39.714162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:57.238 [2024-12-05 14:02:39.719879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.238 [2024-12-05 14:02:39.719900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.238 [2024-12-05 14:02:39.719908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:57.238 [2024-12-05 14:02:39.724982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.238 [2024-12-05 14:02:39.725007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.238 [2024-12-05 14:02:39.725016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:57.238 [2024-12-05 14:02:39.730226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x5ec1a0) 00:30:57.238 [2024-12-05 14:02:39.730246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.238 [2024-12-05 14:02:39.730254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:57.238 [2024-12-05 14:02:39.735325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.238 [2024-12-05 14:02:39.735345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.238 [2024-12-05 14:02:39.735353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:57.239 [2024-12-05 14:02:39.740411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.239 [2024-12-05 14:02:39.740431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.239 [2024-12-05 14:02:39.740439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:57.239 [2024-12-05 14:02:39.745485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.239 [2024-12-05 14:02:39.745505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.239 [2024-12-05 14:02:39.745513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:57.239 [2024-12-05 14:02:39.750599] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.239 [2024-12-05 14:02:39.750619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.239 [2024-12-05 14:02:39.750627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:30:57.239 [2024-12-05 14:02:39.755700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.239 [2024-12-05 14:02:39.755720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.239 [2024-12-05 14:02:39.755728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
[... the same data digest error / READ command / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplet repeats on tqpair=(0x5ec1a0) from 2024-12-05 14:02:39.760 through 14:02:40.199, differing only in timestamp, cid (13-15), lba, and sqhd ...]
00:30:57.762 [2024-12-05 14:02:40.204746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.762 [2024-12-05 14:02:40.204769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.762 [2024-12-05 14:02:40.204777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:57.763 [2024-12-05 14:02:40.209934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data
digest error on tqpair=(0x5ec1a0) 00:30:57.763 [2024-12-05 14:02:40.209959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.763 [2024-12-05 14:02:40.209968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:57.763 [2024-12-05 14:02:40.215096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.763 [2024-12-05 14:02:40.215117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.763 [2024-12-05 14:02:40.215126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:57.763 [2024-12-05 14:02:40.220410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.763 [2024-12-05 14:02:40.220430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.763 [2024-12-05 14:02:40.220439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:57.763 [2024-12-05 14:02:40.225739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.763 [2024-12-05 14:02:40.225760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.763 [2024-12-05 14:02:40.225768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:57.763 [2024-12-05 14:02:40.231422] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.763 [2024-12-05 14:02:40.231444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.763 [2024-12-05 14:02:40.231453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:57.763 [2024-12-05 14:02:40.237600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.763 [2024-12-05 14:02:40.237622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.763 [2024-12-05 14:02:40.237630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:57.763 [2024-12-05 14:02:40.244084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.763 [2024-12-05 14:02:40.244106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.763 [2024-12-05 14:02:40.244114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:57.763 [2024-12-05 14:02:40.251294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.763 [2024-12-05 14:02:40.251317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.763 [2024-12-05 14:02:40.251325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0020 p:0 
m:0 dnr:0 00:30:57.763 [2024-12-05 14:02:40.258027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.763 [2024-12-05 14:02:40.258049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.763 [2024-12-05 14:02:40.258057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:57.763 [2024-12-05 14:02:40.264338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.763 [2024-12-05 14:02:40.264361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.763 [2024-12-05 14:02:40.264377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:57.763 [2024-12-05 14:02:40.270636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.763 [2024-12-05 14:02:40.270659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.763 [2024-12-05 14:02:40.270668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:57.763 [2024-12-05 14:02:40.276805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.763 [2024-12-05 14:02:40.276828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.763 [2024-12-05 14:02:40.276837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:57.763 [2024-12-05 14:02:40.282562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.763 [2024-12-05 14:02:40.282585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.763 [2024-12-05 14:02:40.282594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:57.763 [2024-12-05 14:02:40.290088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.763 [2024-12-05 14:02:40.290111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.763 [2024-12-05 14:02:40.290119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:57.763 [2024-12-05 14:02:40.297211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.763 [2024-12-05 14:02:40.297235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.763 [2024-12-05 14:02:40.297244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:57.763 [2024-12-05 14:02:40.303803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.763 [2024-12-05 14:02:40.303825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.763 [2024-12-05 14:02:40.303833] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:57.763 [2024-12-05 14:02:40.311008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.763 [2024-12-05 14:02:40.311030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.763 [2024-12-05 14:02:40.311039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:57.763 [2024-12-05 14:02:40.318732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.763 [2024-12-05 14:02:40.318755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.763 [2024-12-05 14:02:40.318767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:57.763 [2024-12-05 14:02:40.326419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.763 [2024-12-05 14:02:40.326442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.763 [2024-12-05 14:02:40.326451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:57.763 [2024-12-05 14:02:40.333798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.763 [2024-12-05 14:02:40.333819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:57.763 [2024-12-05 14:02:40.333828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:57.763 [2024-12-05 14:02:40.339897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.763 [2024-12-05 14:02:40.339919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.763 [2024-12-05 14:02:40.339927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:57.763 [2024-12-05 14:02:40.345252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:57.763 [2024-12-05 14:02:40.345274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.763 [2024-12-05 14:02:40.345283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:58.023 [2024-12-05 14:02:40.350520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.023 [2024-12-05 14:02:40.350541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.023 [2024-12-05 14:02:40.350550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.023 [2024-12-05 14:02:40.355774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.023 [2024-12-05 14:02:40.355796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.023 [2024-12-05 14:02:40.355803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:58.023 [2024-12-05 14:02:40.361276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.023 [2024-12-05 14:02:40.361298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.023 [2024-12-05 14:02:40.361306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:58.023 [2024-12-05 14:02:40.367045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.023 [2024-12-05 14:02:40.367066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.023 [2024-12-05 14:02:40.367075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:58.023 [2024-12-05 14:02:40.372475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.023 [2024-12-05 14:02:40.372501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.023 [2024-12-05 14:02:40.372509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.023 [2024-12-05 14:02:40.377735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.024 [2024-12-05 14:02:40.377757] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.024 [2024-12-05 14:02:40.377766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:58.024 [2024-12-05 14:02:40.383048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.024 [2024-12-05 14:02:40.383070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.024 [2024-12-05 14:02:40.383078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:58.024 [2024-12-05 14:02:40.388288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.024 [2024-12-05 14:02:40.388310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.024 [2024-12-05 14:02:40.388318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:58.024 [2024-12-05 14:02:40.393542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.024 [2024-12-05 14:02:40.393564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.024 [2024-12-05 14:02:40.393572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.024 [2024-12-05 14:02:40.398815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 
00:30:58.024 [2024-12-05 14:02:40.398838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.024 [2024-12-05 14:02:40.398846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:58.024 [2024-12-05 14:02:40.404086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.024 [2024-12-05 14:02:40.404107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.024 [2024-12-05 14:02:40.404116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:58.024 [2024-12-05 14:02:40.409339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.024 [2024-12-05 14:02:40.409360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.024 [2024-12-05 14:02:40.409385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:58.024 [2024-12-05 14:02:40.414670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.024 [2024-12-05 14:02:40.414691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.024 [2024-12-05 14:02:40.414699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.024 [2024-12-05 14:02:40.419761] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.024 [2024-12-05 14:02:40.419783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.024 [2024-12-05 14:02:40.419792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:58.024 [2024-12-05 14:02:40.425052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.024 [2024-12-05 14:02:40.425073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.024 [2024-12-05 14:02:40.425081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:58.024 [2024-12-05 14:02:40.430326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.024 [2024-12-05 14:02:40.430348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.024 [2024-12-05 14:02:40.430356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:58.024 [2024-12-05 14:02:40.435600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.024 [2024-12-05 14:02:40.435622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.024 [2024-12-05 14:02:40.435630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:30:58.024 [2024-12-05 14:02:40.441037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.024 [2024-12-05 14:02:40.441059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.024 [2024-12-05 14:02:40.441068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:58.024 [2024-12-05 14:02:40.446295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.024 [2024-12-05 14:02:40.446317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.024 [2024-12-05 14:02:40.446325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:58.024 [2024-12-05 14:02:40.451567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.024 [2024-12-05 14:02:40.451589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.024 [2024-12-05 14:02:40.451597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:58.024 5557.00 IOPS, 694.62 MiB/s [2024-12-05T13:02:40.611Z] [2024-12-05 14:02:40.457986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.024 [2024-12-05 14:02:40.458009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.024 [2024-12-05 14:02:40.458018] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.024 [2024-12-05 14:02:40.463904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.024 [2024-12-05 14:02:40.463926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.024 [2024-12-05 14:02:40.463937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:58.024 [2024-12-05 14:02:40.469826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.024 [2024-12-05 14:02:40.469848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.024 [2024-12-05 14:02:40.469857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:58.024 [2024-12-05 14:02:40.475095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.024 [2024-12-05 14:02:40.475117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.024 [2024-12-05 14:02:40.475125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:58.024 [2024-12-05 14:02:40.480512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.024 [2024-12-05 14:02:40.480534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:58.024 [2024-12-05 14:02:40.480543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.024 [2024-12-05 14:02:40.485832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.024 [2024-12-05 14:02:40.485853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.024 [2024-12-05 14:02:40.485861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:58.024 [2024-12-05 14:02:40.491112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.024 [2024-12-05 14:02:40.491134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.024 [2024-12-05 14:02:40.491142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:58.024 [2024-12-05 14:02:40.496446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.024 [2024-12-05 14:02:40.496467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.024 [2024-12-05 14:02:40.496476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:58.024 [2024-12-05 14:02:40.501786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.024 [2024-12-05 14:02:40.501806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.024 [2024-12-05 14:02:40.501815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.024 [2024-12-05 14:02:40.507034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.024 [2024-12-05 14:02:40.507054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.024 [2024-12-05 14:02:40.507062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:58.024 [2024-12-05 14:02:40.512257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.024 [2024-12-05 14:02:40.512278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.024 [2024-12-05 14:02:40.512287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:58.024 [2024-12-05 14:02:40.517578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.024 [2024-12-05 14:02:40.517601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.024 [2024-12-05 14:02:40.517609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:58.025 [2024-12-05 14:02:40.521287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.025 [2024-12-05 14:02:40.521309] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.025 [2024-12-05 14:02:40.521317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.025 [2024-12-05 14:02:40.524685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.025 [2024-12-05 14:02:40.524706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.025 [2024-12-05 14:02:40.524715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:58.025 [2024-12-05 14:02:40.529617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.025 [2024-12-05 14:02:40.529638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.025 [2024-12-05 14:02:40.529646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:58.025 [2024-12-05 14:02:40.534648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.025 [2024-12-05 14:02:40.534672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.025 [2024-12-05 14:02:40.534680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:58.025 [2024-12-05 14:02:40.539651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 
00:30:58.025 [2024-12-05 14:02:40.539672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.025 [2024-12-05 14:02:40.539680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.025 [2024-12-05 14:02:40.544632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.025 [2024-12-05 14:02:40.544653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.025 [2024-12-05 14:02:40.544660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:58.025 [2024-12-05 14:02:40.549760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.025 [2024-12-05 14:02:40.549781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.025 [2024-12-05 14:02:40.549792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:58.025 [2024-12-05 14:02:40.555023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.025 [2024-12-05 14:02:40.555043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.025 [2024-12-05 14:02:40.555052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:58.025 [2024-12-05 14:02:40.560291] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.025 [2024-12-05 14:02:40.560313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.025 [2024-12-05 14:02:40.560321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.025 [2024-12-05 14:02:40.565545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.025 [2024-12-05 14:02:40.565565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.025 [2024-12-05 14:02:40.565574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:58.025 [2024-12-05 14:02:40.570706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.025 [2024-12-05 14:02:40.570726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.025 [2024-12-05 14:02:40.570734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:58.025 [2024-12-05 14:02:40.575987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.025 [2024-12-05 14:02:40.576006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.025 [2024-12-05 14:02:40.576014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 
m:0 dnr:0 00:30:58.025 [2024-12-05 14:02:40.581266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.025 [2024-12-05 14:02:40.581286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.025 [2024-12-05 14:02:40.581295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.025 [2024-12-05 14:02:40.586535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.025 [2024-12-05 14:02:40.586555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.025 [2024-12-05 14:02:40.586564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:58.025 [2024-12-05 14:02:40.591798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.025 [2024-12-05 14:02:40.591819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.025 [2024-12-05 14:02:40.591827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:58.025 [2024-12-05 14:02:40.597395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.025 [2024-12-05 14:02:40.597419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.025 [2024-12-05 14:02:40.597429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:58.025 [2024-12-05 14:02:40.602241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.025 [2024-12-05 14:02:40.602262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.025 [2024-12-05 14:02:40.602271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.025 [2024-12-05 14:02:40.607757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.025 [2024-12-05 14:02:40.607779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.025 [2024-12-05 14:02:40.607788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:58.285 [2024-12-05 14:02:40.613642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.285 [2024-12-05 14:02:40.613665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.285 [2024-12-05 14:02:40.613673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:58.286 [2024-12-05 14:02:40.619030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.286 [2024-12-05 14:02:40.619052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.286 [2024-12-05 14:02:40.619061] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:58.286 [2024-12-05 14:02:40.624442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.286 [2024-12-05 14:02:40.624464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.286 [2024-12-05 14:02:40.624472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.286 [2024-12-05 14:02:40.629800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.286 [2024-12-05 14:02:40.629821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.286 [2024-12-05 14:02:40.629831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:58.286 [2024-12-05 14:02:40.635223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.286 [2024-12-05 14:02:40.635245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.286 [2024-12-05 14:02:40.635254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:58.286 [2024-12-05 14:02:40.640491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.286 [2024-12-05 14:02:40.640513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:58.286 [2024-12-05 14:02:40.640522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:58.286 [2024-12-05 14:02:40.645803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.286 [2024-12-05 14:02:40.645826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.286 [2024-12-05 14:02:40.645834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.286 [2024-12-05 14:02:40.651081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.286 [2024-12-05 14:02:40.651102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.286 [2024-12-05 14:02:40.651111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:58.286 [2024-12-05 14:02:40.656300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.286 [2024-12-05 14:02:40.656321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.286 [2024-12-05 14:02:40.656329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:58.286 [2024-12-05 14:02:40.661608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.286 [2024-12-05 14:02:40.661630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.286 [2024-12-05 14:02:40.661638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:58.286 [2024-12-05 14:02:40.666884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.286 [2024-12-05 14:02:40.666906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.286 [2024-12-05 14:02:40.666914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.286 [2024-12-05 14:02:40.672185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.286 [2024-12-05 14:02:40.672206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.286 [2024-12-05 14:02:40.672214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:58.286 [2024-12-05 14:02:40.677566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.286 [2024-12-05 14:02:40.677587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.286 [2024-12-05 14:02:40.677595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:58.286 [2024-12-05 14:02:40.683060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.286 [2024-12-05 14:02:40.683082] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.286 [2024-12-05 14:02:40.683091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:58.286 [2024-12-05 14:02:40.688445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.286 [2024-12-05 14:02:40.688466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.286 [2024-12-05 14:02:40.688479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.286 [2024-12-05 14:02:40.693669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.286 [2024-12-05 14:02:40.693690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.286 [2024-12-05 14:02:40.693698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:58.286 [2024-12-05 14:02:40.699165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.286 [2024-12-05 14:02:40.699187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.286 [2024-12-05 14:02:40.699195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:58.286 [2024-12-05 14:02:40.704402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 
00:30:58.286 [2024-12-05 14:02:40.704423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.286 [2024-12-05 14:02:40.704431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:58.286 [2024-12-05 14:02:40.709650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.286 [2024-12-05 14:02:40.709672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.286 [2024-12-05 14:02:40.709680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.286 [2024-12-05 14:02:40.715146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.286 [2024-12-05 14:02:40.715169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.286 [2024-12-05 14:02:40.715177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:58.286 [2024-12-05 14:02:40.720727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.286 [2024-12-05 14:02:40.720748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.286 [2024-12-05 14:02:40.720755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:58.286 [2024-12-05 14:02:40.727274] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.286 [2024-12-05 14:02:40.727296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.286 [2024-12-05 14:02:40.727305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:58.286 [2024-12-05 14:02:40.732643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.286 [2024-12-05 14:02:40.732665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.286 [2024-12-05 14:02:40.732674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.286 [2024-12-05 14:02:40.736495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.286 [2024-12-05 14:02:40.736521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.286 [2024-12-05 14:02:40.736530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:58.286 [2024-12-05 14:02:40.740891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.286 [2024-12-05 14:02:40.740914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.286 [2024-12-05 14:02:40.740922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0040 
p:0 m:0 dnr:0 00:30:58.286 [2024-12-05 14:02:40.746358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.286 [2024-12-05 14:02:40.746387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.286 [2024-12-05 14:02:40.746396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:58.286 [2024-12-05 14:02:40.751988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.286 [2024-12-05 14:02:40.752010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.286 [2024-12-05 14:02:40.752019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.286 [2024-12-05 14:02:40.757626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.286 [2024-12-05 14:02:40.757647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.287 [2024-12-05 14:02:40.757655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:58.287 [2024-12-05 14:02:40.763198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.287 [2024-12-05 14:02:40.763220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.287 [2024-12-05 14:02:40.763229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:58.287 [2024-12-05 14:02:40.768657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.287 [2024-12-05 14:02:40.768677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.287 [2024-12-05 14:02:40.768685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:58.287 [2024-12-05 14:02:40.774215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.287 [2024-12-05 14:02:40.774236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.287 [2024-12-05 14:02:40.774245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.287 [2024-12-05 14:02:40.779841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.287 [2024-12-05 14:02:40.779863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.287 [2024-12-05 14:02:40.779871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:58.287 [2024-12-05 14:02:40.785458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.287 [2024-12-05 14:02:40.785479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.287 [2024-12-05 14:02:40.785488] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:58.287 [2024-12-05 14:02:40.790557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.287 [2024-12-05 14:02:40.790578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.287 [2024-12-05 14:02:40.790587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:58.287 [2024-12-05 14:02:40.795965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.287 [2024-12-05 14:02:40.795987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.287 [2024-12-05 14:02:40.795995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.287 [2024-12-05 14:02:40.801144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.287 [2024-12-05 14:02:40.801166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.287 [2024-12-05 14:02:40.801174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:58.287 [2024-12-05 14:02:40.805878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.287 [2024-12-05 14:02:40.805899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:58.287 [2024-12-05 14:02:40.805907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:58.287 [2024-12-05 14:02:40.811607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.287 [2024-12-05 14:02:40.811629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.287 [2024-12-05 14:02:40.811637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:58.287 [2024-12-05 14:02:40.817617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.287 [2024-12-05 14:02:40.817639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.287 [2024-12-05 14:02:40.817647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.287 [2024-12-05 14:02:40.823765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.287 [2024-12-05 14:02:40.823788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.287 [2024-12-05 14:02:40.823797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:58.287 [2024-12-05 14:02:40.829921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.287 [2024-12-05 14:02:40.829944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.287 [2024-12-05 14:02:40.829956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:58.287 [2024-12-05 14:02:40.833541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.287 [2024-12-05 14:02:40.833562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.287 [2024-12-05 14:02:40.833570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:58.287 [2024-12-05 14:02:40.838051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.287 [2024-12-05 14:02:40.838073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.287 [2024-12-05 14:02:40.838081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.287 [2024-12-05 14:02:40.843581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.287 [2024-12-05 14:02:40.843603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.287 [2024-12-05 14:02:40.843611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:58.287 [2024-12-05 14:02:40.849996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.287 [2024-12-05 14:02:40.850018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.287 [2024-12-05 14:02:40.850027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:58.287 [2024-12-05 14:02:40.856111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.287 [2024-12-05 14:02:40.856133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.287 [2024-12-05 14:02:40.856141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:58.287 [2024-12-05 14:02:40.861575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.287 [2024-12-05 14:02:40.861597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.287 [2024-12-05 14:02:40.861605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.287 [2024-12-05 14:02:40.867192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.287 [2024-12-05 14:02:40.867214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.287 [2024-12-05 14:02:40.867223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:58.548 [2024-12-05 14:02:40.872570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 
00:30:58.548 [2024-12-05 14:02:40.872602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.548 [2024-12-05 14:02:40.872611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:30:58.548 [2024-12-05 14:02:40.878141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.548 [2024-12-05 14:02:40.878166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.548 [2024-12-05 14:02:40.878174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:30:58.548 [2024-12-05 14:02:40.883575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.548 [2024-12-05 14:02:40.883596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.548 [2024-12-05 14:02:40.883605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:58.548 [2024-12-05 14:02:40.889135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.548 [2024-12-05 14:02:40.889156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.548 [2024-12-05 14:02:40.889165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:30:58.548 [2024-12-05 14:02:40.894587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.548 [2024-12-05 14:02:40.894607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.548 [2024-12-05 14:02:40.894616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:30:58.548 [2024-12-05 14:02:40.899946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.548 [2024-12-05 14:02:40.899968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.548 [2024-12-05 14:02:40.899977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:30:58.548 [2024-12-05 14:02:40.905563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.548 [2024-12-05 14:02:40.905585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.548 [2024-12-05 14:02:40.905593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:58.548 [2024-12-05 14:02:40.911204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.548 [2024-12-05 14:02:40.911225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.548 [2024-12-05 14:02:40.911234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:30:58.548 [2024-12-05 14:02:40.916444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.548 [2024-12-05 14:02:40.916466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.548 [2024-12-05 14:02:40.916474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:30:58.548 [2024-12-05 14:02:40.921978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.548 [2024-12-05 14:02:40.922000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.548 [2024-12-05 14:02:40.922008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:30:58.548 [2024-12-05 14:02:40.927321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.548 [2024-12-05 14:02:40.927342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.548 [2024-12-05 14:02:40.927351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:58.548 [2024-12-05 14:02:40.932535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.548 [2024-12-05 14:02:40.932557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.548 [2024-12-05 14:02:40.932565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:30:58.548 [2024-12-05 14:02:40.937815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.548 [2024-12-05 14:02:40.937837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.548 [2024-12-05 14:02:40.937845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:30:58.548 [2024-12-05 14:02:40.942967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.548 [2024-12-05 14:02:40.942988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.548 [2024-12-05 14:02:40.942997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:30:58.548 [2024-12-05 14:02:40.948346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.548 [2024-12-05 14:02:40.948372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.548 [2024-12-05 14:02:40.948381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:58.548 [2024-12-05 14:02:40.953893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.548 [2024-12-05 14:02:40.953915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.548 [2024-12-05 14:02:40.953923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:30:58.548 [2024-12-05 14:02:40.959394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.548 [2024-12-05 14:02:40.959416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.548 [2024-12-05 14:02:40.959424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:30:58.548 [2024-12-05 14:02:40.964849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.548 [2024-12-05 14:02:40.964871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.548 [2024-12-05 14:02:40.964879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:30:58.548 [2024-12-05 14:02:40.970291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.548 [2024-12-05 14:02:40.970313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.548 [2024-12-05 14:02:40.970325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:58.548 [2024-12-05 14:02:40.976175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.548 [2024-12-05 14:02:40.976202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.548 [2024-12-05 14:02:40.976211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:30:58.548 [2024-12-05 14:02:40.981554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.548 [2024-12-05 14:02:40.981575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.548 [2024-12-05 14:02:40.981583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:30:58.548 [2024-12-05 14:02:40.987082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.548 [2024-12-05 14:02:40.987103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.548 [2024-12-05 14:02:40.987111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:30:58.548 [2024-12-05 14:02:40.992603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.548 [2024-12-05 14:02:40.992625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.548 [2024-12-05 14:02:40.992633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:58.548 [2024-12-05 14:02:40.998070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.548 [2024-12-05 14:02:40.998091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.548 [2024-12-05 14:02:40.998099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:30:58.548 [2024-12-05 14:02:41.002732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.548 [2024-12-05 14:02:41.002754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.548 [2024-12-05 14:02:41.002762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:30:58.548 [2024-12-05 14:02:41.005783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.549 [2024-12-05 14:02:41.005804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.549 [2024-12-05 14:02:41.005812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:30:58.549 [2024-12-05 14:02:41.011160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.549 [2024-12-05 14:02:41.011180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.549 [2024-12-05 14:02:41.011189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:58.549 [2024-12-05 14:02:41.016685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.549 [2024-12-05 14:02:41.016710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.549 [2024-12-05 14:02:41.016719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:30:58.549 [2024-12-05 14:02:41.021937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.549 [2024-12-05 14:02:41.021958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.549 [2024-12-05 14:02:41.021966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:30:58.549 [2024-12-05 14:02:41.027049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.549 [2024-12-05 14:02:41.027069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.549 [2024-12-05 14:02:41.027077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:30:58.549 [2024-12-05 14:02:41.032568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.549 [2024-12-05 14:02:41.032591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.549 [2024-12-05 14:02:41.032599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:58.549 [2024-12-05 14:02:41.038152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.549 [2024-12-05 14:02:41.038172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.549 [2024-12-05 14:02:41.038180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:30:58.549 [2024-12-05 14:02:41.043603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.549 [2024-12-05 14:02:41.043624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.549 [2024-12-05 14:02:41.043632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:30:58.549 [2024-12-05 14:02:41.049127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.549 [2024-12-05 14:02:41.049148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.549 [2024-12-05 14:02:41.049156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:30:58.549 [2024-12-05 14:02:41.054328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.549 [2024-12-05 14:02:41.054349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.549 [2024-12-05 14:02:41.054357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:58.549 [2024-12-05 14:02:41.059497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.549 [2024-12-05 14:02:41.059517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.549 [2024-12-05 14:02:41.059525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:30:58.549 [2024-12-05 14:02:41.064924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.549 [2024-12-05 14:02:41.064945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.549 [2024-12-05 14:02:41.064953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:30:58.549 [2024-12-05 14:02:41.070329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.549 [2024-12-05 14:02:41.070350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.549 [2024-12-05 14:02:41.070358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:30:58.549 [2024-12-05 14:02:41.075753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.549 [2024-12-05 14:02:41.075774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.549 [2024-12-05 14:02:41.075782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:58.549 [2024-12-05 14:02:41.080718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.549 [2024-12-05 14:02:41.080739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.549 [2024-12-05 14:02:41.080748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:30:58.549 [2024-12-05 14:02:41.085751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.549 [2024-12-05 14:02:41.085773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.549 [2024-12-05 14:02:41.085782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:30:58.549 [2024-12-05 14:02:41.090609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.549 [2024-12-05 14:02:41.090631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.549 [2024-12-05 14:02:41.090639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:30:58.549 [2024-12-05 14:02:41.095756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.549 [2024-12-05 14:02:41.095778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.549 [2024-12-05 14:02:41.095786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:58.549 [2024-12-05 14:02:41.100926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.549 [2024-12-05 14:02:41.100947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.549 [2024-12-05 14:02:41.100956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:30:58.549 [2024-12-05 14:02:41.106041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.549 [2024-12-05 14:02:41.106062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.549 [2024-12-05 14:02:41.106076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:30:58.549 [2024-12-05 14:02:41.111301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.549 [2024-12-05 14:02:41.111322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.549 [2024-12-05 14:02:41.111330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:30:58.549 [2024-12-05 14:02:41.116742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.549 [2024-12-05 14:02:41.116764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.549 [2024-12-05 14:02:41.116772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:58.549 [2024-12-05 14:02:41.122393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.549 [2024-12-05 14:02:41.122415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.549 [2024-12-05 14:02:41.122423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:30:58.549 [2024-12-05 14:02:41.127673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.549 [2024-12-05 14:02:41.127695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.549 [2024-12-05 14:02:41.127703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:30:58.809 [2024-12-05 14:02:41.132823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.809 [2024-12-05 14:02:41.132845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.809 [2024-12-05 14:02:41.132855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:30:58.809 [2024-12-05 14:02:41.138024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.809 [2024-12-05 14:02:41.138046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.809 [2024-12-05 14:02:41.138054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:58.809 [2024-12-05 14:02:41.143256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.809 [2024-12-05 14:02:41.143278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.809 [2024-12-05 14:02:41.143286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:30:58.809 [2024-12-05 14:02:41.148462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.809 [2024-12-05 14:02:41.148484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.809 [2024-12-05 14:02:41.148492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:30:58.809 [2024-12-05 14:02:41.153723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.809 [2024-12-05 14:02:41.153745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.809 [2024-12-05 14:02:41.153754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:30:58.809 [2024-12-05 14:02:41.159143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.810 [2024-12-05 14:02:41.159165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.810 [2024-12-05 14:02:41.159173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:58.810 [2024-12-05 14:02:41.164242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.810 [2024-12-05 14:02:41.164262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.810 [2024-12-05 14:02:41.164270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:30:58.810 [2024-12-05 14:02:41.169440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.810 [2024-12-05 14:02:41.169462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.810 [2024-12-05 14:02:41.169470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:30:58.810 [2024-12-05 14:02:41.174774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.810 [2024-12-05 14:02:41.174795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.810 [2024-12-05 14:02:41.174803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:30:58.810 [2024-12-05 14:02:41.177800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.810 [2024-12-05 14:02:41.177821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.810 [2024-12-05 14:02:41.177829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:58.810 [2024-12-05 14:02:41.183381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.810 [2024-12-05 14:02:41.183402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.810 [2024-12-05 14:02:41.183410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:30:58.810 [2024-12-05 14:02:41.188904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.810 [2024-12-05 14:02:41.188927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.810 [2024-12-05 14:02:41.188935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:30:58.810 [2024-12-05 14:02:41.194409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.810 [2024-12-05 14:02:41.194430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.810 [2024-12-05 14:02:41.194441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:30:58.810 [2024-12-05 14:02:41.199739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.810 [2024-12-05 14:02:41.199760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.810 [2024-12-05 14:02:41.199768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:58.810 [2024-12-05 14:02:41.205171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.810 [2024-12-05 14:02:41.205192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.810 [2024-12-05 14:02:41.205200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:30:58.810 [2024-12-05 14:02:41.210644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.810 [2024-12-05 14:02:41.210666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.810 [2024-12-05 14:02:41.210674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:30:58.810 [2024-12-05 14:02:41.216462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.810 [2024-12-05 14:02:41.216483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.810 [2024-12-05 14:02:41.216491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:30:58.810 [2024-12-05 14:02:41.221861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.810 [2024-12-05 14:02:41.221882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.810 [2024-12-05 14:02:41.221890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:58.810 [2024-12-05 14:02:41.227259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.810 [2024-12-05 14:02:41.227281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.810 [2024-12-05 14:02:41.227289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:30:58.810 [2024-12-05 14:02:41.232482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.810 [2024-12-05 14:02:41.232503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.810 [2024-12-05 14:02:41.232511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:30:58.810 [2024-12-05 14:02:41.237997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.810 [2024-12-05 14:02:41.238019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.810 [2024-12-05 14:02:41.238027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:30:58.810 [2024-12-05 14:02:41.243461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.810 [2024-12-05 14:02:41.243486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.810 [2024-12-05 14:02:41.243494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:58.810 [2024-12-05 14:02:41.248320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.810 [2024-12-05 14:02:41.248342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.810 [2024-12-05 14:02:41.248350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:30:58.810 [2024-12-05 14:02:41.253674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.810 [2024-12-05 14:02:41.253695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.810 [2024-12-05 14:02:41.253703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:30:58.810 [2024-12-05 14:02:41.259088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.810 [2024-12-05 14:02:41.259110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.810 [2024-12-05 14:02:41.259118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:30:58.810 [2024-12-05 14:02:41.264601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.810 [2024-12-05 14:02:41.264623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.810 [2024-12-05 14:02:41.264631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:58.810 [2024-12-05 14:02:41.270090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.810 [2024-12-05 14:02:41.270112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.810 [2024-12-05 14:02:41.270120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:30:58.810 [2024-12-05 14:02:41.275474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.810 [2024-12-05 14:02:41.275495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.810 [2024-12-05 14:02:41.275503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:30:58.810 [2024-12-05 14:02:41.280835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0)
00:30:58.810 [2024-12-05 14:02:41.280856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.810 [2024-12-05 14:02:41.280864] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:58.810 [2024-12-05 14:02:41.286154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.810 [2024-12-05 14:02:41.286176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.810 [2024-12-05 14:02:41.286184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.810 [2024-12-05 14:02:41.291576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.810 [2024-12-05 14:02:41.291598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.810 [2024-12-05 14:02:41.291605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:58.810 [2024-12-05 14:02:41.297018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.810 [2024-12-05 14:02:41.297039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.810 [2024-12-05 14:02:41.297047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:58.811 [2024-12-05 14:02:41.302410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.811 [2024-12-05 14:02:41.302430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:58.811 [2024-12-05 14:02:41.302439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:58.811 [2024-12-05 14:02:41.307722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.811 [2024-12-05 14:02:41.307743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.811 [2024-12-05 14:02:41.307751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.811 [2024-12-05 14:02:41.313249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.811 [2024-12-05 14:02:41.313271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.811 [2024-12-05 14:02:41.313280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:58.811 [2024-12-05 14:02:41.319095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.811 [2024-12-05 14:02:41.319117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.811 [2024-12-05 14:02:41.319126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:58.811 [2024-12-05 14:02:41.325138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.811 [2024-12-05 14:02:41.325160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 
lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.811 [2024-12-05 14:02:41.325168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:58.811 [2024-12-05 14:02:41.330375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.811 [2024-12-05 14:02:41.330396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.811 [2024-12-05 14:02:41.330404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.811 [2024-12-05 14:02:41.335684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.811 [2024-12-05 14:02:41.335706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.811 [2024-12-05 14:02:41.335718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:58.811 [2024-12-05 14:02:41.341142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.811 [2024-12-05 14:02:41.341164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.811 [2024-12-05 14:02:41.341173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:58.811 [2024-12-05 14:02:41.346583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.811 [2024-12-05 14:02:41.346605] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.811 [2024-12-05 14:02:41.346613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:58.811 [2024-12-05 14:02:41.351976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.811 [2024-12-05 14:02:41.351997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.811 [2024-12-05 14:02:41.352005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.811 [2024-12-05 14:02:41.356997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.811 [2024-12-05 14:02:41.357019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.811 [2024-12-05 14:02:41.357026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:58.811 [2024-12-05 14:02:41.362480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.811 [2024-12-05 14:02:41.362502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.811 [2024-12-05 14:02:41.362510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:58.811 [2024-12-05 14:02:41.367820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 
00:30:58.811 [2024-12-05 14:02:41.367842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.811 [2024-12-05 14:02:41.367849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:58.811 [2024-12-05 14:02:41.373166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.811 [2024-12-05 14:02:41.373186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.811 [2024-12-05 14:02:41.373195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.811 [2024-12-05 14:02:41.378949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.811 [2024-12-05 14:02:41.378971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.811 [2024-12-05 14:02:41.378979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:58.811 [2024-12-05 14:02:41.384238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.811 [2024-12-05 14:02:41.384262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.811 [2024-12-05 14:02:41.384271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:58.811 [2024-12-05 14:02:41.389541] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:58.811 [2024-12-05 14:02:41.389562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.811 [2024-12-05 14:02:41.389570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:59.070 [2024-12-05 14:02:41.395017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:59.070 [2024-12-05 14:02:41.395039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.070 [2024-12-05 14:02:41.395048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.070 [2024-12-05 14:02:41.400391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:59.070 [2024-12-05 14:02:41.400412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.070 [2024-12-05 14:02:41.400421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:59.070 [2024-12-05 14:02:41.405778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:59.070 [2024-12-05 14:02:41.405800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.070 [2024-12-05 14:02:41.405808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 
p:0 m:0 dnr:0 00:30:59.070 [2024-12-05 14:02:41.411152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:59.070 [2024-12-05 14:02:41.411174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.070 [2024-12-05 14:02:41.411182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:59.070 [2024-12-05 14:02:41.416463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:59.070 [2024-12-05 14:02:41.416484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.070 [2024-12-05 14:02:41.416492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.070 [2024-12-05 14:02:41.421849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:59.070 [2024-12-05 14:02:41.421869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.070 [2024-12-05 14:02:41.421878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:59.070 [2024-12-05 14:02:41.427218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:59.070 [2024-12-05 14:02:41.427239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.070 [2024-12-05 14:02:41.427248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:59.070 [2024-12-05 14:02:41.432414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:59.070 [2024-12-05 14:02:41.432435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.070 [2024-12-05 14:02:41.432443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:59.070 [2024-12-05 14:02:41.437523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:59.070 [2024-12-05 14:02:41.437545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.070 [2024-12-05 14:02:41.437553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.070 [2024-12-05 14:02:41.442714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:59.070 [2024-12-05 14:02:41.442736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.070 [2024-12-05 14:02:41.442744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:59.070 [2024-12-05 14:02:41.447730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:59.070 [2024-12-05 14:02:41.447751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.070 [2024-12-05 14:02:41.447759] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:59.070 [2024-12-05 14:02:41.452904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:59.070 [2024-12-05 14:02:41.452925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.070 [2024-12-05 14:02:41.452933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:59.070 5702.50 IOPS, 712.81 MiB/s [2024-12-05T13:02:41.657Z] [2024-12-05 14:02:41.458640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ec1a0) 00:30:59.070 [2024-12-05 14:02:41.458662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.070 [2024-12-05 14:02:41.458670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.070 00:30:59.070 Latency(us) 00:30:59.070 [2024-12-05T13:02:41.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:59.070 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:59.070 nvme0n1 : 2.00 5701.12 712.64 0.00 0.00 2803.45 612.45 9050.21 00:30:59.070 [2024-12-05T13:02:41.657Z] =================================================================================================================== 00:30:59.070 [2024-12-05T13:02:41.657Z] Total : 5701.12 712.64 0.00 0.00 2803.45 612.45 9050.21 00:30:59.070 { 00:30:59.070 "results": [ 00:30:59.070 { 00:30:59.070 "job": "nvme0n1", 00:30:59.070 "core_mask": "0x2", 00:30:59.070 "workload": "randread", 00:30:59.070 "status": "finished", 00:30:59.070 "queue_depth": 16, 
00:30:59.070 "io_size": 131072, 00:30:59.070 "runtime": 2.003289, 00:30:59.070 "iops": 5701.124500758503, 00:30:59.070 "mibps": 712.6405625948129, 00:30:59.070 "io_failed": 0, 00:30:59.070 "io_timeout": 0, 00:30:59.070 "avg_latency_us": 2803.4477267856623, 00:30:59.070 "min_latency_us": 612.4495238095238, 00:30:59.070 "max_latency_us": 9050.209523809524 00:30:59.070 } 00:30:59.070 ], 00:30:59.070 "core_count": 1 00:30:59.070 } 00:30:59.070 14:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:59.070 14:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:59.070 14:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:59.070 | .driver_specific 00:30:59.070 | .nvme_error 00:30:59.070 | .status_code 00:30:59.070 | .command_transient_transport_error' 00:30:59.070 14:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:59.329 14:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 368 > 0 )) 00:30:59.329 14:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 815241 00:30:59.329 14:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 815241 ']' 00:30:59.329 14:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 815241 00:30:59.329 14:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:30:59.329 14:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:59.329 14:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 815241 00:30:59.329 14:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:59.329 14:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:59.329 14:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 815241' 00:30:59.329 killing process with pid 815241 00:30:59.329 14:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 815241 00:30:59.329 Received shutdown signal, test time was about 2.000000 seconds 00:30:59.329 00:30:59.329 Latency(us) 00:30:59.329 [2024-12-05T13:02:41.916Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:59.329 [2024-12-05T13:02:41.916Z] =================================================================================================================== 00:30:59.329 [2024-12-05T13:02:41.916Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:59.329 14:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 815241 00:30:59.329 14:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:30:59.329 14:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:59.329 14:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:30:59.329 14:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:30:59.329 14:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:30:59.329 14:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=815896 00:30:59.329 14:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 815896 
/var/tmp/bperf.sock 00:30:59.329 14:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:30:59.329 14:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 815896 ']' 00:30:59.329 14:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:59.329 14:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:59.329 14:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:59.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:59.329 14:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:59.330 14:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:59.589 [2024-12-05 14:02:41.950897] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:30:59.589 [2024-12-05 14:02:41.950943] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid815896 ] 00:30:59.589 [2024-12-05 14:02:42.024257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:59.589 [2024-12-05 14:02:42.064494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:59.589 14:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:59.589 14:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:59.589 14:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:59.589 14:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:59.848 14:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:59.848 14:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.849 14:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:59.849 14:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.849 14:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:59.849 14:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:00.418 nvme0n1 00:31:00.418 14:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:31:00.418 14:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.418 14:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:00.418 14:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.418 14:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:00.418 14:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:00.418 Running I/O for 2 seconds... 
00:31:00.418 [2024-12-05 14:02:42.875317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78
00:31:00.418 [2024-12-05 14:02:42.875453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:00.418 [2024-12-05 14:02:42.875481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0
[... the same record triplet (data_crc32_calc_done *ERROR*, nvme_io_qpair_print_command WRITE *NOTICE*, spdk_nvme_print_completion TRANSIENT TRANSPORT ERROR (00/22) *NOTICE*) repeats for the duration of the injected-error run, cycling cid 107-113 with varying LBAs, from 14:02:42.884 through 14:02:43.596; repeated records omitted, the last record is truncated in the source ...]
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.199 [2024-12-05 14:02:43.605735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.199 [2024-12-05 14:02:43.605849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.199 [2024-12-05 14:02:43.605866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.199 [2024-12-05 14:02:43.615189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.199 [2024-12-05 14:02:43.615304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.199 [2024-12-05 14:02:43.615325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.199 [2024-12-05 14:02:43.624532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.199 [2024-12-05 14:02:43.624654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.199 [2024-12-05 14:02:43.624671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.199 [2024-12-05 14:02:43.633840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.199 [2024-12-05 14:02:43.633954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:01.199 [2024-12-05 14:02:43.633971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.199 [2024-12-05 14:02:43.643331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.199 [2024-12-05 14:02:43.643453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.199 [2024-12-05 14:02:43.643470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.199 [2024-12-05 14:02:43.652642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.199 [2024-12-05 14:02:43.652756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.199 [2024-12-05 14:02:43.652774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.199 [2024-12-05 14:02:43.661945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.199 [2024-12-05 14:02:43.662059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.199 [2024-12-05 14:02:43.662077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.199 [2024-12-05 14:02:43.671223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.199 [2024-12-05 14:02:43.671340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 
lba:24021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.199 [2024-12-05 14:02:43.671358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.199 [2024-12-05 14:02:43.680555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.199 [2024-12-05 14:02:43.680670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.199 [2024-12-05 14:02:43.680687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.199 [2024-12-05 14:02:43.689984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.199 [2024-12-05 14:02:43.690097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.199 [2024-12-05 14:02:43.690114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.199 [2024-12-05 14:02:43.699281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.199 [2024-12-05 14:02:43.699405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.199 [2024-12-05 14:02:43.699423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.199 [2024-12-05 14:02:43.708620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.199 [2024-12-05 14:02:43.708735] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.199 [2024-12-05 14:02:43.708753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.199 [2024-12-05 14:02:43.717933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.199 [2024-12-05 14:02:43.718050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.200 [2024-12-05 14:02:43.718067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.200 [2024-12-05 14:02:43.727234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.200 [2024-12-05 14:02:43.727350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.200 [2024-12-05 14:02:43.727372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.200 [2024-12-05 14:02:43.736548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.200 [2024-12-05 14:02:43.736663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.200 [2024-12-05 14:02:43.736681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.200 [2024-12-05 14:02:43.745867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 
00:31:01.200 [2024-12-05 14:02:43.745983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.200 [2024-12-05 14:02:43.746000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.200 [2024-12-05 14:02:43.755177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.200 [2024-12-05 14:02:43.755291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.200 [2024-12-05 14:02:43.755309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.200 [2024-12-05 14:02:43.764472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.200 [2024-12-05 14:02:43.764588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.200 [2024-12-05 14:02:43.764606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.200 [2024-12-05 14:02:43.773791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.200 [2024-12-05 14:02:43.773907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.200 [2024-12-05 14:02:43.773924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.200 [2024-12-05 14:02:43.783243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.200 [2024-12-05 14:02:43.783362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.200 [2024-12-05 14:02:43.783387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.459 [2024-12-05 14:02:43.792767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.459 [2024-12-05 14:02:43.792881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.459 [2024-12-05 14:02:43.792899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.459 [2024-12-05 14:02:43.802063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.459 [2024-12-05 14:02:43.802179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.460 [2024-12-05 14:02:43.802196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.460 [2024-12-05 14:02:43.811407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.460 [2024-12-05 14:02:43.811522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.460 [2024-12-05 14:02:43.811540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.460 [2024-12-05 14:02:43.820753] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.460 [2024-12-05 14:02:43.820863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.460 [2024-12-05 14:02:43.820880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.460 [2024-12-05 14:02:43.830080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.460 [2024-12-05 14:02:43.830194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.460 [2024-12-05 14:02:43.830211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.460 [2024-12-05 14:02:43.839430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.460 [2024-12-05 14:02:43.839546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.460 [2024-12-05 14:02:43.839564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.460 [2024-12-05 14:02:43.848759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.460 [2024-12-05 14:02:43.848872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.460 [2024-12-05 14:02:43.848890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006d p:0 
m:0 dnr:0 00:31:01.460 [2024-12-05 14:02:43.858077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.460 [2024-12-05 14:02:43.858192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.460 [2024-12-05 14:02:43.858214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.460 27121.00 IOPS, 105.94 MiB/s [2024-12-05T13:02:44.047Z] [2024-12-05 14:02:43.867422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.460 [2024-12-05 14:02:43.867536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.460 [2024-12-05 14:02:43.867555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.460 [2024-12-05 14:02:43.876698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.460 [2024-12-05 14:02:43.876809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.460 [2024-12-05 14:02:43.876827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.460 [2024-12-05 14:02:43.885997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.460 [2024-12-05 14:02:43.886113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.460 [2024-12-05 14:02:43.886132] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.460 [2024-12-05 14:02:43.895536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.460 [2024-12-05 14:02:43.895651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.460 [2024-12-05 14:02:43.895669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.460 [2024-12-05 14:02:43.904822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.460 [2024-12-05 14:02:43.904937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.460 [2024-12-05 14:02:43.904955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.460 [2024-12-05 14:02:43.914155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.460 [2024-12-05 14:02:43.914268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.460 [2024-12-05 14:02:43.914286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.460 [2024-12-05 14:02:43.923436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.460 [2024-12-05 14:02:43.923552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:01.460 [2024-12-05 14:02:43.923570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.460 [2024-12-05 14:02:43.932760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.460 [2024-12-05 14:02:43.932876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.460 [2024-12-05 14:02:43.932894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.460 [2024-12-05 14:02:43.942065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.460 [2024-12-05 14:02:43.942184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.460 [2024-12-05 14:02:43.942201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.460 [2024-12-05 14:02:43.951374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.460 [2024-12-05 14:02:43.951489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.460 [2024-12-05 14:02:43.951507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.460 [2024-12-05 14:02:43.960652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.460 [2024-12-05 14:02:43.960767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 
lba:25367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.460 [2024-12-05 14:02:43.960785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.460 [2024-12-05 14:02:43.969976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.460 [2024-12-05 14:02:43.970090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.460 [2024-12-05 14:02:43.970107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.460 [2024-12-05 14:02:43.979272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.460 [2024-12-05 14:02:43.979388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.460 [2024-12-05 14:02:43.979405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.460 [2024-12-05 14:02:43.988634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.460 [2024-12-05 14:02:43.988747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.460 [2024-12-05 14:02:43.988764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.460 [2024-12-05 14:02:43.997951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.460 [2024-12-05 14:02:43.998065] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.460 [2024-12-05 14:02:43.998083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.460 [2024-12-05 14:02:44.007278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.460 [2024-12-05 14:02:44.007400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.460 [2024-12-05 14:02:44.007418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.460 [2024-12-05 14:02:44.016590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.460 [2024-12-05 14:02:44.016706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.460 [2024-12-05 14:02:44.016723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.460 [2024-12-05 14:02:44.025897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.460 [2024-12-05 14:02:44.026012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.460 [2024-12-05 14:02:44.026029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.460 [2024-12-05 14:02:44.035222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 
00:31:01.460 [2024-12-05 14:02:44.035334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.460 [2024-12-05 14:02:44.035351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.460 [2024-12-05 14:02:44.044749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.460 [2024-12-05 14:02:44.044866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.460 [2024-12-05 14:02:44.044884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.720 [2024-12-05 14:02:44.054232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.720 [2024-12-05 14:02:44.054346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.720 [2024-12-05 14:02:44.054364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.720 [2024-12-05 14:02:44.063531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.720 [2024-12-05 14:02:44.063650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.720 [2024-12-05 14:02:44.063667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.720 [2024-12-05 14:02:44.072866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.720 [2024-12-05 14:02:44.072982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.720 [2024-12-05 14:02:44.072999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.720 [2024-12-05 14:02:44.082333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.720 [2024-12-05 14:02:44.082456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.720 [2024-12-05 14:02:44.082473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.720 [2024-12-05 14:02:44.091664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.720 [2024-12-05 14:02:44.091779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.720 [2024-12-05 14:02:44.091796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.720 [2024-12-05 14:02:44.100986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.720 [2024-12-05 14:02:44.101101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.720 [2024-12-05 14:02:44.101122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.720 [2024-12-05 14:02:44.110317] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.720 [2024-12-05 14:02:44.110438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.720 [2024-12-05 14:02:44.110456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.720 [2024-12-05 14:02:44.119633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.720 [2024-12-05 14:02:44.119750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.720 [2024-12-05 14:02:44.119769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.720 [2024-12-05 14:02:44.128955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.720 [2024-12-05 14:02:44.129069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.720 [2024-12-05 14:02:44.129086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.720 [2024-12-05 14:02:44.138282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.720 [2024-12-05 14:02:44.138402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.721 [2024-12-05 14:02:44.138420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006d p:0 
m:0 dnr:0 00:31:01.721 [2024-12-05 14:02:44.147823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.721 [2024-12-05 14:02:44.147946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.721 [2024-12-05 14:02:44.147963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.721 [2024-12-05 14:02:44.157076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.721 [2024-12-05 14:02:44.157195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.721 [2024-12-05 14:02:44.157212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.721 [2024-12-05 14:02:44.166419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.721 [2024-12-05 14:02:44.166536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.721 [2024-12-05 14:02:44.166552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.721 [2024-12-05 14:02:44.175720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.721 [2024-12-05 14:02:44.175835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.721 [2024-12-05 14:02:44.175852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.721 [2024-12-05 14:02:44.185050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.721 [2024-12-05 14:02:44.185170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.721 [2024-12-05 14:02:44.185188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.721 [2024-12-05 14:02:44.194303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.721 [2024-12-05 14:02:44.194424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.721 [2024-12-05 14:02:44.194441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.721 [2024-12-05 14:02:44.203615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.721 [2024-12-05 14:02:44.203736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.721 [2024-12-05 14:02:44.203754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.721 [2024-12-05 14:02:44.212921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.721 [2024-12-05 14:02:44.213035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.721 [2024-12-05 14:02:44.213052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.721 [2024-12-05 14:02:44.222230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.721 [2024-12-05 14:02:44.222347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.721 [2024-12-05 14:02:44.222364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.721 [2024-12-05 14:02:44.231551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.721 [2024-12-05 14:02:44.231667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.721 [2024-12-05 14:02:44.231683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.721 [2024-12-05 14:02:44.240865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.721 [2024-12-05 14:02:44.240980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.721 [2024-12-05 14:02:44.240997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.721 [2024-12-05 14:02:44.250195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.721 [2024-12-05 14:02:44.250310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:01.721 [2024-12-05 14:02:44.250327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.721 [2024-12-05 14:02:44.259525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.721 [2024-12-05 14:02:44.259642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.721 [2024-12-05 14:02:44.259660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.721 [2024-12-05 14:02:44.268843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.721 [2024-12-05 14:02:44.268958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.721 [2024-12-05 14:02:44.268975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.721 [2024-12-05 14:02:44.278158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.721 [2024-12-05 14:02:44.278275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.721 [2024-12-05 14:02:44.278292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.721 [2024-12-05 14:02:44.287489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.721 [2024-12-05 14:02:44.287603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 
lba:16935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.721 [2024-12-05 14:02:44.287620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.721 [2024-12-05 14:02:44.296794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.721 [2024-12-05 14:02:44.296908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.721 [2024-12-05 14:02:44.296927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.980 [2024-12-05 14:02:44.306289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.980 [2024-12-05 14:02:44.306415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.980 [2024-12-05 14:02:44.306432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.980 [2024-12-05 14:02:44.315774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.980 [2024-12-05 14:02:44.315890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.980 [2024-12-05 14:02:44.315907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.980 [2024-12-05 14:02:44.325058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.980 [2024-12-05 14:02:44.325175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.980 [2024-12-05 14:02:44.325192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.980 [2024-12-05 14:02:44.334394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.980 [2024-12-05 14:02:44.334508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.980 [2024-12-05 14:02:44.334526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.980 [2024-12-05 14:02:44.343679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.980 [2024-12-05 14:02:44.343793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.980 [2024-12-05 14:02:44.343813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.980 [2024-12-05 14:02:44.353003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.980 [2024-12-05 14:02:44.353116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.980 [2024-12-05 14:02:44.353133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.980 [2024-12-05 14:02:44.362298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 
00:31:01.980 [2024-12-05 14:02:44.362418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.980 [2024-12-05 14:02:44.362436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.980 [2024-12-05 14:02:44.371648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.980 [2024-12-05 14:02:44.371763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.980 [2024-12-05 14:02:44.371781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.980 [2024-12-05 14:02:44.380945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.980 [2024-12-05 14:02:44.381061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.980 [2024-12-05 14:02:44.381079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.980 [2024-12-05 14:02:44.390290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.980 [2024-12-05 14:02:44.390411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.980 [2024-12-05 14:02:44.390428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.980 [2024-12-05 14:02:44.399861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.980 [2024-12-05 14:02:44.399978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.980 [2024-12-05 14:02:44.399995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.980 [2024-12-05 14:02:44.409165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.981 [2024-12-05 14:02:44.409281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.981 [2024-12-05 14:02:44.409300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.981 [2024-12-05 14:02:44.418495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.981 [2024-12-05 14:02:44.418610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.981 [2024-12-05 14:02:44.418628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.981 [2024-12-05 14:02:44.427826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.981 [2024-12-05 14:02:44.427947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.981 [2024-12-05 14:02:44.427967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.981 [2024-12-05 14:02:44.437151] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.981 [2024-12-05 14:02:44.437266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.981 [2024-12-05 14:02:44.437283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.981 [2024-12-05 14:02:44.446650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.981 [2024-12-05 14:02:44.446767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.981 [2024-12-05 14:02:44.446785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.981 [2024-12-05 14:02:44.456022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.981 [2024-12-05 14:02:44.456141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.981 [2024-12-05 14:02:44.456159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.981 [2024-12-05 14:02:44.465637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.981 [2024-12-05 14:02:44.465748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.981 [2024-12-05 14:02:44.465765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006d p:0 
m:0 dnr:0 00:31:01.981 [2024-12-05 14:02:44.474951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.981 [2024-12-05 14:02:44.475067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.981 [2024-12-05 14:02:44.475085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.981 [2024-12-05 14:02:44.484278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.981 [2024-12-05 14:02:44.484401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.981 [2024-12-05 14:02:44.484419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.981 [2024-12-05 14:02:44.493617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.981 [2024-12-05 14:02:44.493730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.981 [2024-12-05 14:02:44.493747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.981 [2024-12-05 14:02:44.502956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.981 [2024-12-05 14:02:44.503070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.981 [2024-12-05 14:02:44.503087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.981 [2024-12-05 14:02:44.512296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.981 [2024-12-05 14:02:44.512422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.981 [2024-12-05 14:02:44.512440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.981 [2024-12-05 14:02:44.521597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.981 [2024-12-05 14:02:44.521711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.981 [2024-12-05 14:02:44.521728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.981 [2024-12-05 14:02:44.530929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.981 [2024-12-05 14:02:44.531045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.981 [2024-12-05 14:02:44.531063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.981 [2024-12-05 14:02:44.540219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.981 [2024-12-05 14:02:44.540334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.981 [2024-12-05 14:02:44.540351] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.981 [2024-12-05 14:02:44.549536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.981 [2024-12-05 14:02:44.549652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.981 [2024-12-05 14:02:44.549670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:01.981 [2024-12-05 14:02:44.558861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:01.981 [2024-12-05 14:02:44.558974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.981 [2024-12-05 14:02:44.558991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.241 [2024-12-05 14:02:44.568395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.241 [2024-12-05 14:02:44.568510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.241 [2024-12-05 14:02:44.568528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.241 [2024-12-05 14:02:44.577808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.241 [2024-12-05 14:02:44.577924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:02.241 [2024-12-05 14:02:44.577941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.241 [2024-12-05 14:02:44.587133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.241 [2024-12-05 14:02:44.587243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.241 [2024-12-05 14:02:44.587263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.241 [2024-12-05 14:02:44.596466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.241 [2024-12-05 14:02:44.596582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.241 [2024-12-05 14:02:44.596600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.241 [2024-12-05 14:02:44.605788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.241 [2024-12-05 14:02:44.605904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.241 [2024-12-05 14:02:44.605921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.241 [2024-12-05 14:02:44.615114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.241 [2024-12-05 14:02:44.615232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 
lba:17654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.241 [2024-12-05 14:02:44.615249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.241 [2024-12-05 14:02:44.624455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.241 [2024-12-05 14:02:44.624571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.241 [2024-12-05 14:02:44.624589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.241 [2024-12-05 14:02:44.633762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.241 [2024-12-05 14:02:44.633874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.241 [2024-12-05 14:02:44.633891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.241 [2024-12-05 14:02:44.642999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.241 [2024-12-05 14:02:44.643115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.241 [2024-12-05 14:02:44.643133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.241 [2024-12-05 14:02:44.652630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.241 [2024-12-05 14:02:44.652739] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.241 [2024-12-05 14:02:44.652755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.241 [2024-12-05 14:02:44.661937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.241 [2024-12-05 14:02:44.662062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.241 [2024-12-05 14:02:44.662080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.241 [2024-12-05 14:02:44.671275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.241 [2024-12-05 14:02:44.671409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.241 [2024-12-05 14:02:44.671426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.241 [2024-12-05 14:02:44.680619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.241 [2024-12-05 14:02:44.680735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.241 [2024-12-05 14:02:44.680752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.241 [2024-12-05 14:02:44.689924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 
00:31:02.241 [2024-12-05 14:02:44.690039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.241 [2024-12-05 14:02:44.690056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.241 [2024-12-05 14:02:44.699403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.241 [2024-12-05 14:02:44.699518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.241 [2024-12-05 14:02:44.699535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.241 [2024-12-05 14:02:44.708709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.241 [2024-12-05 14:02:44.708824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.242 [2024-12-05 14:02:44.708841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.242 [2024-12-05 14:02:44.718016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.242 [2024-12-05 14:02:44.718131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.242 [2024-12-05 14:02:44.718148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.242 [2024-12-05 14:02:44.727334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.242 [2024-12-05 14:02:44.727454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.242 [2024-12-05 14:02:44.727471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.242 [2024-12-05 14:02:44.736610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.242 [2024-12-05 14:02:44.736724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.242 [2024-12-05 14:02:44.736742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.242 [2024-12-05 14:02:44.745939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.242 [2024-12-05 14:02:44.746053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.242 [2024-12-05 14:02:44.746070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.242 [2024-12-05 14:02:44.755266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.242 [2024-12-05 14:02:44.755383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.242 [2024-12-05 14:02:44.755401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.242 [2024-12-05 14:02:44.764525] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.242 [2024-12-05 14:02:44.764642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.242 [2024-12-05 14:02:44.764659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.242 [2024-12-05 14:02:44.773874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.242 [2024-12-05 14:02:44.773989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.242 [2024-12-05 14:02:44.774007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.242 [2024-12-05 14:02:44.783186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.242 [2024-12-05 14:02:44.783297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.242 [2024-12-05 14:02:44.783314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.242 [2024-12-05 14:02:44.792545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.242 [2024-12-05 14:02:44.792658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.242 [2024-12-05 14:02:44.792676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 
dnr:0 00:31:02.242 [2024-12-05 14:02:44.801827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.242 [2024-12-05 14:02:44.801942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.242 [2024-12-05 14:02:44.801960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.242 [2024-12-05 14:02:44.811160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.242 [2024-12-05 14:02:44.811274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.242 [2024-12-05 14:02:44.811291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.242 [2024-12-05 14:02:44.820485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.242 [2024-12-05 14:02:44.820599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.242 [2024-12-05 14:02:44.820616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.501 [2024-12-05 14:02:44.830067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.501 [2024-12-05 14:02:44.830187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.501 [2024-12-05 14:02:44.830208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.501 [2024-12-05 14:02:44.839469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.501 [2024-12-05 14:02:44.839585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.501 [2024-12-05 14:02:44.839603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.501 [2024-12-05 14:02:44.848780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.501 [2024-12-05 14:02:44.848897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:27 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.501 [2024-12-05 14:02:44.848914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.501 [2024-12-05 14:02:44.858114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.501 [2024-12-05 14:02:44.858227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.501 [2024-12-05 14:02:44.858245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.501 27239.00 IOPS, 106.40 MiB/s [2024-12-05T13:02:45.088Z] [2024-12-05 14:02:44.867424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e90180) with pdu=0x200016efda78 00:31:02.501 [2024-12-05 14:02:44.867538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.501 
[2024-12-05 14:02:44.867555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:02.501 00:31:02.501 Latency(us) 00:31:02.501 [2024-12-05T13:02:45.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:02.501 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:02.501 nvme0n1 : 2.01 27239.19 106.40 0.00 0.00 4691.02 3495.25 11484.40 00:31:02.501 [2024-12-05T13:02:45.088Z] =================================================================================================================== 00:31:02.501 [2024-12-05T13:02:45.088Z] Total : 27239.19 106.40 0.00 0.00 4691.02 3495.25 11484.40 00:31:02.501 { 00:31:02.501 "results": [ 00:31:02.501 { 00:31:02.501 "job": "nvme0n1", 00:31:02.501 "core_mask": "0x2", 00:31:02.501 "workload": "randwrite", 00:31:02.501 "status": "finished", 00:31:02.501 "queue_depth": 128, 00:31:02.501 "io_size": 4096, 00:31:02.501 "runtime": 2.00586, 00:31:02.501 "iops": 27239.189175715153, 00:31:02.501 "mibps": 106.40308271763732, 00:31:02.501 "io_failed": 0, 00:31:02.501 "io_timeout": 0, 00:31:02.501 "avg_latency_us": 4691.024251253706, 00:31:02.501 "min_latency_us": 3495.2533333333336, 00:31:02.501 "max_latency_us": 11484.40380952381 00:31:02.501 } 00:31:02.501 ], 00:31:02.501 "core_count": 1 00:31:02.501 } 00:31:02.501 14:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:02.501 14:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:02.501 14:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:02.501 | .driver_specific 00:31:02.501 | .nvme_error 00:31:02.501 | .status_code 00:31:02.501 | .command_transient_transport_error' 00:31:02.501 14:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:02.501 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 214 > 0 )) 00:31:02.501 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 815896 00:31:02.501 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 815896 ']' 00:31:02.501 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 815896 00:31:02.759 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:31:02.759 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:02.759 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 815896 00:31:02.759 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:02.759 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:02.759 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 815896' 00:31:02.759 killing process with pid 815896 00:31:02.759 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 815896 00:31:02.759 Received shutdown signal, test time was about 2.000000 seconds 00:31:02.759 00:31:02.759 Latency(us) 00:31:02.759 [2024-12-05T13:02:45.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:02.759 [2024-12-05T13:02:45.346Z] =================================================================================================================== 00:31:02.759 
[2024-12-05T13:02:45.346Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:02.759 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 815896 00:31:02.759 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:31:02.759 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:31:02.759 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:31:02.759 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:31:02.759 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:31:02.759 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=816407 00:31:02.759 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 816407 /var/tmp/bperf.sock 00:31:02.759 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:31:02.759 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 816407 ']' 00:31:02.759 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:02.759 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:02.759 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:02.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:31:02.759 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:02.759 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:03.017 [2024-12-05 14:02:45.348165] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:31:03.017 [2024-12-05 14:02:45.348212] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid816407 ] 00:31:03.017 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:03.017 Zero copy mechanism will not be used. 00:31:03.017 [2024-12-05 14:02:45.420294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:03.017 [2024-12-05 14:02:45.457395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:03.017 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:03.017 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:31:03.017 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:03.017 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:03.274 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:03.274 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.274 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:31:03.274 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.274 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:03.274 14:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:03.531 nvme0n1 00:31:03.531 14:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:31:03.531 14:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.531 14:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:03.531 14:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.531 14:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:03.531 14:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:03.790 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:03.790 Zero copy mechanism will not be used. 00:31:03.790 Running I/O for 2 seconds... 
00:31:03.790 [2024-12-05 14:02:46.136127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.790 [2024-12-05 14:02:46.136213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.790 [2024-12-05 14:02:46.136242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:03.790 [2024-12-05 14:02:46.140539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.790 [2024-12-05 14:02:46.140608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.790 [2024-12-05 14:02:46.140631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:03.790 [2024-12-05 14:02:46.144758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.790 [2024-12-05 14:02:46.144816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.790 [2024-12-05 14:02:46.144838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:03.790 [2024-12-05 14:02:46.148856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.790 [2024-12-05 14:02:46.148923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.790 [2024-12-05 14:02:46.148944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.790 [2024-12-05 14:02:46.152930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.790 [2024-12-05 14:02:46.152985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.790 [2024-12-05 14:02:46.153005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:03.790 [2024-12-05 14:02:46.157023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.790 [2024-12-05 14:02:46.157089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.790 [2024-12-05 14:02:46.157109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:03.790 [2024-12-05 14:02:46.161047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.790 [2024-12-05 14:02:46.161104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.790 [2024-12-05 14:02:46.161123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:03.790 [2024-12-05 14:02:46.165041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.790 [2024-12-05 14:02:46.165107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.790 [2024-12-05 14:02:46.165126] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.790 [2024-12-05 14:02:46.169070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.790 [2024-12-05 14:02:46.169127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.790 [2024-12-05 14:02:46.169146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:03.790 [2024-12-05 14:02:46.173090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.790 [2024-12-05 14:02:46.173150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.790 [2024-12-05 14:02:46.173170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:03.790 [2024-12-05 14:02:46.177139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.790 [2024-12-05 14:02:46.177201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.790 [2024-12-05 14:02:46.177220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:03.790 [2024-12-05 14:02:46.181171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.790 [2024-12-05 14:02:46.181238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:03.790 [2024-12-05 14:02:46.181263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.790 [2024-12-05 14:02:46.185169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.790 [2024-12-05 14:02:46.185231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.790 [2024-12-05 14:02:46.185250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:03.790 [2024-12-05 14:02:46.189184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.790 [2024-12-05 14:02:46.189238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.790 [2024-12-05 14:02:46.189257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:03.790 [2024-12-05 14:02:46.193162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.790 [2024-12-05 14:02:46.193232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.790 [2024-12-05 14:02:46.193251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:03.790 [2024-12-05 14:02:46.197170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.790 [2024-12-05 14:02:46.197231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.790 [2024-12-05 14:02:46.197250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.790 [2024-12-05 14:02:46.201187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.790 [2024-12-05 14:02:46.201255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.790 [2024-12-05 14:02:46.201274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:03.790 [2024-12-05 14:02:46.205173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.790 [2024-12-05 14:02:46.205230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.790 [2024-12-05 14:02:46.205248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:03.790 [2024-12-05 14:02:46.209191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.790 [2024-12-05 14:02:46.209257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.790 [2024-12-05 14:02:46.209276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:03.790 [2024-12-05 14:02:46.213180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.790 [2024-12-05 14:02:46.213257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.790 [2024-12-05 14:02:46.213275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.790 [2024-12-05 14:02:46.217143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.790 [2024-12-05 14:02:46.217217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.790 [2024-12-05 14:02:46.217235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:03.790 [2024-12-05 14:02:46.221124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.790 [2024-12-05 14:02:46.221187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.790 [2024-12-05 14:02:46.221206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:03.790 [2024-12-05 14:02:46.225113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.790 [2024-12-05 14:02:46.225176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.790 [2024-12-05 14:02:46.225194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:03.790 [2024-12-05 14:02:46.229117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 
00:31:03.790 [2024-12-05 14:02:46.229180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.790 [2024-12-05 14:02:46.229198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.790 [2024-12-05 14:02:46.233124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.790 [2024-12-05 14:02:46.233183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.790 [2024-12-05 14:02:46.233201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:03.790 [2024-12-05 14:02:46.237130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.790 [2024-12-05 14:02:46.237193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.790 [2024-12-05 14:02:46.237212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:03.791 [2024-12-05 14:02:46.241111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.791 [2024-12-05 14:02:46.241172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.791 [2024-12-05 14:02:46.241191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:03.791 [2024-12-05 14:02:46.245127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.791 [2024-12-05 14:02:46.245179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.791 [2024-12-05 14:02:46.245197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.791 [2024-12-05 14:02:46.249478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.791 [2024-12-05 14:02:46.249547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.791 [2024-12-05 14:02:46.249565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:03.791 [2024-12-05 14:02:46.253510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.791 [2024-12-05 14:02:46.253563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.791 [2024-12-05 14:02:46.253581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:03.791 [2024-12-05 14:02:46.257603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.791 [2024-12-05 14:02:46.257656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.791 [2024-12-05 14:02:46.257676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:03.791 [2024-12-05 14:02:46.261857] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.791 [2024-12-05 14:02:46.261912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.791 [2024-12-05 14:02:46.261930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.791 [2024-12-05 14:02:46.265844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.791 [2024-12-05 14:02:46.265905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.791 [2024-12-05 14:02:46.265925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:03.791 [2024-12-05 14:02:46.269809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.791 [2024-12-05 14:02:46.269890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.791 [2024-12-05 14:02:46.269908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:03.791 [2024-12-05 14:02:46.274517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.791 [2024-12-05 14:02:46.274577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.791 [2024-12-05 14:02:46.274595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:31:03.791 [2024-12-05 14:02:46.278674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.791 [2024-12-05 14:02:46.278729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.791 [2024-12-05 14:02:46.278748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.791 [2024-12-05 14:02:46.282704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.791 [2024-12-05 14:02:46.282763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.791 [2024-12-05 14:02:46.282781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:03.791 [2024-12-05 14:02:46.286666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.791 [2024-12-05 14:02:46.286730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.791 [2024-12-05 14:02:46.286752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:03.791 [2024-12-05 14:02:46.290909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.791 [2024-12-05 14:02:46.290971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.791 [2024-12-05 14:02:46.290990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:03.791 [2024-12-05 14:02:46.295159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.791 [2024-12-05 14:02:46.295223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.791 [2024-12-05 14:02:46.295242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.791 [2024-12-05 14:02:46.299222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.791 [2024-12-05 14:02:46.299278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.791 [2024-12-05 14:02:46.299296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:03.791 [2024-12-05 14:02:46.303699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.791 [2024-12-05 14:02:46.303753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.791 [2024-12-05 14:02:46.303771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:03.791 [2024-12-05 14:02:46.307958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.791 [2024-12-05 14:02:46.308012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.791 [2024-12-05 14:02:46.308030] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:03.791 [2024-12-05 14:02:46.312030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.791 [2024-12-05 14:02:46.312089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.791 [2024-12-05 14:02:46.312108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.791 [2024-12-05 14:02:46.316098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.791 [2024-12-05 14:02:46.316152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.791 [2024-12-05 14:02:46.316171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:03.791 [2024-12-05 14:02:46.320266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.791 [2024-12-05 14:02:46.320324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.791 [2024-12-05 14:02:46.320342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:03.791 [2024-12-05 14:02:46.324404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.791 [2024-12-05 14:02:46.324474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:03.791 [2024-12-05 14:02:46.324492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:03.791 [2024-12-05 14:02:46.328577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.791 [2024-12-05 14:02:46.328643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.791 [2024-12-05 14:02:46.328661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.791 [2024-12-05 14:02:46.332696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.791 [2024-12-05 14:02:46.332749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.791 [2024-12-05 14:02:46.332767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:03.791 [2024-12-05 14:02:46.336822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.791 [2024-12-05 14:02:46.336884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.791 [2024-12-05 14:02:46.336903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:03.791 [2024-12-05 14:02:46.341079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.791 [2024-12-05 14:02:46.341131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.791 [2024-12-05 14:02:46.341149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:03.791 [2024-12-05 14:02:46.345742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.791 [2024-12-05 14:02:46.345794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.791 [2024-12-05 14:02:46.345813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.791 [2024-12-05 14:02:46.349858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.791 [2024-12-05 14:02:46.349910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.792 [2024-12-05 14:02:46.349928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:03.792 [2024-12-05 14:02:46.354351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.792 [2024-12-05 14:02:46.354424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.792 [2024-12-05 14:02:46.354443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:03.792 [2024-12-05 14:02:46.359340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.792 [2024-12-05 14:02:46.359418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.792 [2024-12-05 14:02:46.359437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:03.792 [2024-12-05 14:02:46.364520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.792 [2024-12-05 14:02:46.364602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.792 [2024-12-05 14:02:46.364623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.792 [2024-12-05 14:02:46.369309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.792 [2024-12-05 14:02:46.369377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.792 [2024-12-05 14:02:46.369396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:03.792 [2024-12-05 14:02:46.374004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:03.792 [2024-12-05 14:02:46.374065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.792 [2024-12-05 14:02:46.374084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:04.051 [2024-12-05 14:02:46.378550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 
00:31:04.051 [2024-12-05 14:02:46.378654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.051 [2024-12-05 14:02:46.378673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:04.051 [2024-12-05 14:02:46.383051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.051 [2024-12-05 14:02:46.383142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.051 [2024-12-05 14:02:46.383161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.051 [2024-12-05 14:02:46.387706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.051 [2024-12-05 14:02:46.387769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.051 [2024-12-05 14:02:46.387789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:04.051 [2024-12-05 14:02:46.391913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.051 [2024-12-05 14:02:46.391968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.051 [2024-12-05 14:02:46.391987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:04.051 [2024-12-05 14:02:46.396112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.051 [2024-12-05 14:02:46.396182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.051 [2024-12-05 14:02:46.396201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:04.051 [2024-12-05 14:02:46.400338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.051 [2024-12-05 14:02:46.400406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.051 [2024-12-05 14:02:46.400429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.051 [2024-12-05 14:02:46.404538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.051 [2024-12-05 14:02:46.404606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.051 [2024-12-05 14:02:46.404625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:04.051 [2024-12-05 14:02:46.408638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.051 [2024-12-05 14:02:46.408697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.051 [2024-12-05 14:02:46.408716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:04.051 [2024-12-05 14:02:46.412728] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.051 [2024-12-05 14:02:46.412785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.051 [2024-12-05 14:02:46.412804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:04.051 [2024-12-05 14:02:46.416887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.051 [2024-12-05 14:02:46.416963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.051 [2024-12-05 14:02:46.416983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.051 [2024-12-05 14:02:46.421000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.051 [2024-12-05 14:02:46.421062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.051 [2024-12-05 14:02:46.421080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:04.051 [2024-12-05 14:02:46.425473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.051 [2024-12-05 14:02:46.425564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.051 [2024-12-05 14:02:46.425583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:31:04.051 [2024-12-05 14:02:46.429956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.051 [2024-12-05 14:02:46.430023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.052 [2024-12-05 14:02:46.430043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:04.052 [2024-12-05 14:02:46.434246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.052 [2024-12-05 14:02:46.434307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.052 [2024-12-05 14:02:46.434325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.052 [2024-12-05 14:02:46.438906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.052 [2024-12-05 14:02:46.438965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.052 [2024-12-05 14:02:46.438984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:04.052 [2024-12-05 14:02:46.443241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.052 [2024-12-05 14:02:46.443310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.052 [2024-12-05 14:02:46.443329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:04.052 [2024-12-05 14:02:46.447385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.052 [2024-12-05 14:02:46.447451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.052 [2024-12-05 14:02:46.447470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:04.052 [2024-12-05 14:02:46.451497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.052 [2024-12-05 14:02:46.451562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.052 [2024-12-05 14:02:46.451581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.052 [2024-12-05 14:02:46.455597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.052 [2024-12-05 14:02:46.455676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.052 [2024-12-05 14:02:46.455695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:04.052 [2024-12-05 14:02:46.459747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.052 [2024-12-05 14:02:46.459809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.052 [2024-12-05 14:02:46.459828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:04.052 [2024-12-05 14:02:46.463873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.052 [2024-12-05 14:02:46.463944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.052 [2024-12-05 14:02:46.463962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:04.052 [2024-12-05 14:02:46.467966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.052 [2024-12-05 14:02:46.468024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.052 [2024-12-05 14:02:46.468042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.052 [2024-12-05 14:02:46.472090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.052 [2024-12-05 14:02:46.472140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.052 [2024-12-05 14:02:46.472158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:04.052 [2024-12-05 14:02:46.476204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.052 [2024-12-05 14:02:46.476271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:04.052 [2024-12-05 14:02:46.476290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:04.052 [2024-12-05 14:02:46.480243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.052 [2024-12-05 14:02:46.480295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.052 [2024-12-05 14:02:46.480314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:04.052 [2024-12-05 14:02:46.484304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.052 [2024-12-05 14:02:46.484376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.052 [2024-12-05 14:02:46.484412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.052 [2024-12-05 14:02:46.488458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.052 [2024-12-05 14:02:46.488516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.052 [2024-12-05 14:02:46.488535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:04.052 [2024-12-05 14:02:46.492552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.052 [2024-12-05 14:02:46.492617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.052 [2024-12-05 14:02:46.492636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:04.052 [2024-12-05 14:02:46.496701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.052 [2024-12-05 14:02:46.496767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.052 [2024-12-05 14:02:46.496786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:04.052 [2024-12-05 14:02:46.500788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.052 [2024-12-05 14:02:46.500849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.052 [2024-12-05 14:02:46.500867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.052 [2024-12-05 14:02:46.504901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.052 [2024-12-05 14:02:46.504967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.052 [2024-12-05 14:02:46.504986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:04.052 [2024-12-05 14:02:46.509035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.052 [2024-12-05 14:02:46.509113] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.052 [2024-12-05 14:02:46.509135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:04.052 [2024-12-05 14:02:46.513180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.052 [2024-12-05 14:02:46.513245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.052 [2024-12-05 14:02:46.513264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:04.052 [2024-12-05 14:02:46.517250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.052 [2024-12-05 14:02:46.517316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.052 [2024-12-05 14:02:46.517335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.052 [2024-12-05 14:02:46.521346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.052 [2024-12-05 14:02:46.521416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.052 [2024-12-05 14:02:46.521435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:04.052 [2024-12-05 14:02:46.525450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 
00:31:04.052 [2024-12-05 14:02:46.525519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.052 [2024-12-05 14:02:46.525538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.052 [2024-12-05 14:02:46.529563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.052 [2024-12-05 14:02:46.529624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.052 [2024-12-05 14:02:46.529643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.052 [2024-12-05 14:02:46.533669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.052 [2024-12-05 14:02:46.533733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.052 [2024-12-05 14:02:46.533751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.052 [2024-12-05 14:02:46.537835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.052 [2024-12-05 14:02:46.537907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.052 [2024-12-05 14:02:46.537926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.052 [2024-12-05 14:02:46.541964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.052 [2024-12-05 14:02:46.542015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.053 [2024-12-05 14:02:46.542033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.053 [2024-12-05 14:02:46.546074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.053 [2024-12-05 14:02:46.546132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.053 [2024-12-05 14:02:46.546151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.053 [2024-12-05 14:02:46.550194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.053 [2024-12-05 14:02:46.550258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.053 [2024-12-05 14:02:46.550276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.053 [2024-12-05 14:02:46.554500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.053 [2024-12-05 14:02:46.554555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.053 [2024-12-05 14:02:46.554574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.053 [2024-12-05 14:02:46.558996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.053 [2024-12-05 14:02:46.559054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.053 [2024-12-05 14:02:46.559073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.053 [2024-12-05 14:02:46.563412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.053 [2024-12-05 14:02:46.563464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.053 [2024-12-05 14:02:46.563483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.053 [2024-12-05 14:02:46.567814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.053 [2024-12-05 14:02:46.567888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.053 [2024-12-05 14:02:46.567907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.053 [2024-12-05 14:02:46.572390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.053 [2024-12-05 14:02:46.572466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.053 [2024-12-05 14:02:46.572485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.053 [2024-12-05 14:02:46.577020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.053 [2024-12-05 14:02:46.577117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.053 [2024-12-05 14:02:46.577135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.053 [2024-12-05 14:02:46.581613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.053 [2024-12-05 14:02:46.581670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.053 [2024-12-05 14:02:46.581687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.053 [2024-12-05 14:02:46.586132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.053 [2024-12-05 14:02:46.586189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.053 [2024-12-05 14:02:46.586208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.053 [2024-12-05 14:02:46.590774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.053 [2024-12-05 14:02:46.590833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.053 [2024-12-05 14:02:46.590852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.053 [2024-12-05 14:02:46.595245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.053 [2024-12-05 14:02:46.595335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.053 [2024-12-05 14:02:46.595354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.053 [2024-12-05 14:02:46.599713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.053 [2024-12-05 14:02:46.599799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.053 [2024-12-05 14:02:46.599818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.053 [2024-12-05 14:02:46.604156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.053 [2024-12-05 14:02:46.604281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.053 [2024-12-05 14:02:46.604299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.053 [2024-12-05 14:02:46.608641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.053 [2024-12-05 14:02:46.608692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.053 [2024-12-05 14:02:46.608710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.053 [2024-12-05 14:02:46.613235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.053 [2024-12-05 14:02:46.613299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.053 [2024-12-05 14:02:46.613318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.053 [2024-12-05 14:02:46.617748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.053 [2024-12-05 14:02:46.617814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.053 [2024-12-05 14:02:46.617833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.053 [2024-12-05 14:02:46.622413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.053 [2024-12-05 14:02:46.622471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.053 [2024-12-05 14:02:46.622493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.053 [2024-12-05 14:02:46.626885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.053 [2024-12-05 14:02:46.626987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.053 [2024-12-05 14:02:46.627006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.053 [2024-12-05 14:02:46.631304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.053 [2024-12-05 14:02:46.631382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.053 [2024-12-05 14:02:46.631401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.313 [2024-12-05 14:02:46.635788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.313 [2024-12-05 14:02:46.635843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.313 [2024-12-05 14:02:46.635863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.313 [2024-12-05 14:02:46.640224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.313 [2024-12-05 14:02:46.640278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.313 [2024-12-05 14:02:46.640297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.313 [2024-12-05 14:02:46.645069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.313 [2024-12-05 14:02:46.645151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.313 [2024-12-05 14:02:46.645170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.313 [2024-12-05 14:02:46.649613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.313 [2024-12-05 14:02:46.649721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.313 [2024-12-05 14:02:46.649740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.313 [2024-12-05 14:02:46.654249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.313 [2024-12-05 14:02:46.654353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.314 [2024-12-05 14:02:46.654377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.314 [2024-12-05 14:02:46.659249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.314 [2024-12-05 14:02:46.659617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.314 [2024-12-05 14:02:46.659639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.314 [2024-12-05 14:02:46.665853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.314 [2024-12-05 14:02:46.666170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.314 [2024-12-05 14:02:46.666190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.314 [2024-12-05 14:02:46.671232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.314 [2024-12-05 14:02:46.671485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.314 [2024-12-05 14:02:46.671505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.314 [2024-12-05 14:02:46.676072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.314 [2024-12-05 14:02:46.676335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.314 [2024-12-05 14:02:46.676355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.314 [2024-12-05 14:02:46.680792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.314 [2024-12-05 14:02:46.681024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.314 [2024-12-05 14:02:46.681043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.314 [2024-12-05 14:02:46.685510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.314 [2024-12-05 14:02:46.685755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.314 [2024-12-05 14:02:46.685775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.314 [2024-12-05 14:02:46.690211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.314 [2024-12-05 14:02:46.690470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.314 [2024-12-05 14:02:46.690491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.314 [2024-12-05 14:02:46.694860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.314 [2024-12-05 14:02:46.695126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.314 [2024-12-05 14:02:46.695146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.314 [2024-12-05 14:02:46.699447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.314 [2024-12-05 14:02:46.699703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.314 [2024-12-05 14:02:46.699723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.314 [2024-12-05 14:02:46.703691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.314 [2024-12-05 14:02:46.703952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.314 [2024-12-05 14:02:46.703972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.314 [2024-12-05 14:02:46.708185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.314 [2024-12-05 14:02:46.708456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.314 [2024-12-05 14:02:46.708476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.314 [2024-12-05 14:02:46.713922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.314 [2024-12-05 14:02:46.714281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.314 [2024-12-05 14:02:46.714302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.314 [2024-12-05 14:02:46.719210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.314 [2024-12-05 14:02:46.719473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.314 [2024-12-05 14:02:46.719493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.314 [2024-12-05 14:02:46.724562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.314 [2024-12-05 14:02:46.724828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.314 [2024-12-05 14:02:46.724848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.314 [2024-12-05 14:02:46.729528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.314 [2024-12-05 14:02:46.729779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.314 [2024-12-05 14:02:46.729799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.314 [2024-12-05 14:02:46.733466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.314 [2024-12-05 14:02:46.733712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.314 [2024-12-05 14:02:46.733731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.314 [2024-12-05 14:02:46.737398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.314 [2024-12-05 14:02:46.737669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.314 [2024-12-05 14:02:46.737688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.314 [2024-12-05 14:02:46.741328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.314 [2024-12-05 14:02:46.741599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.314 [2024-12-05 14:02:46.741619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.314 [2024-12-05 14:02:46.745249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.314 [2024-12-05 14:02:46.745531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.314 [2024-12-05 14:02:46.745555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.314 [2024-12-05 14:02:46.749624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.314 [2024-12-05 14:02:46.749891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.314 [2024-12-05 14:02:46.749912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.314 [2024-12-05 14:02:46.754257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.314 [2024-12-05 14:02:46.754525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.314 [2024-12-05 14:02:46.754545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.314 [2024-12-05 14:02:46.758812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.314 [2024-12-05 14:02:46.759087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.314 [2024-12-05 14:02:46.759106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.314 [2024-12-05 14:02:46.763531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.314 [2024-12-05 14:02:46.763793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.314 [2024-12-05 14:02:46.763813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.314 [2024-12-05 14:02:46.768923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.314 [2024-12-05 14:02:46.769156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.314 [2024-12-05 14:02:46.769176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.314 [2024-12-05 14:02:46.774481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.314 [2024-12-05 14:02:46.774749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.314 [2024-12-05 14:02:46.774769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.314 [2024-12-05 14:02:46.781071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.314 [2024-12-05 14:02:46.781342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.314 [2024-12-05 14:02:46.781362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.314 [2024-12-05 14:02:46.787010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.315 [2024-12-05 14:02:46.787249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.315 [2024-12-05 14:02:46.787269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.315 [2024-12-05 14:02:46.793136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.315 [2024-12-05 14:02:46.793404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.315 [2024-12-05 14:02:46.793425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.315 [2024-12-05 14:02:46.798035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.315 [2024-12-05 14:02:46.798300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.315 [2024-12-05 14:02:46.798320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.315 [2024-12-05 14:02:46.802141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.315 [2024-12-05 14:02:46.802410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.315 [2024-12-05 14:02:46.802430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.315 [2024-12-05 14:02:46.806249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.315 [2024-12-05 14:02:46.806496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.315 [2024-12-05 14:02:46.806515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.315 [2024-12-05 14:02:46.810240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.315 [2024-12-05 14:02:46.810493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.315 [2024-12-05 14:02:46.810513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.315 [2024-12-05 14:02:46.814299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.315 [2024-12-05 14:02:46.814542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.315 [2024-12-05 14:02:46.814563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.315 [2024-12-05 14:02:46.818343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.315 [2024-12-05 14:02:46.818603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.315 [2024-12-05 14:02:46.818623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.315 [2024-12-05 14:02:46.822323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.315 [2024-12-05 14:02:46.822585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.315 [2024-12-05 14:02:46.822605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.315 [2024-12-05 14:02:46.826341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.315 [2024-12-05 14:02:46.826609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.315 [2024-12-05 14:02:46.826629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.315 [2024-12-05 14:02:46.830289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.315 [2024-12-05 14:02:46.830557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.315 [2024-12-05 14:02:46.830577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.315 [2024-12-05 14:02:46.834613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.315 [2024-12-05 14:02:46.834840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.315 [2024-12-05 14:02:46.834860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.315 [2024-12-05 14:02:46.838803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.315 [2024-12-05 14:02:46.839052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.315 [2024-12-05 14:02:46.839072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.315 [2024-12-05 14:02:46.842752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.315 [2024-12-05 14:02:46.842985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.315 [2024-12-05 14:02:46.843005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.315 [2024-12-05 14:02:46.846631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.315 [2024-12-05 14:02:46.846885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.315 [2024-12-05 14:02:46.846906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.315 [2024-12-05 14:02:46.850569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.315 [2024-12-05 14:02:46.850813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.315 [2024-12-05 14:02:46.850833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.315 [2024-12-05 14:02:46.854517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.315 [2024-12-05 14:02:46.854758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.315 [2024-12-05 14:02:46.854778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.315 [2024-12-05 14:02:46.858448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.315 [2024-12-05 14:02:46.858695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.315 [2024-12-05 14:02:46.858715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.315 [2024-12-05 14:02:46.862383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.315 [2024-12-05 14:02:46.862634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.315 [2024-12-05 14:02:46.862658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.315 [2024-12-05 14:02:46.866279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.315 [2024-12-05 14:02:46.866529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.315 [2024-12-05 14:02:46.866549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.315 [2024-12-05 14:02:46.870204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.315 [2024-12-05 14:02:46.870457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.315 [2024-12-05 14:02:46.870477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.315 [2024-12-05 14:02:46.874099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.315 [2024-12-05 14:02:46.874347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.315 [2024-12-05 14:02:46.874371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.315 [2024-12-05 14:02:46.878021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.315 [2024-12-05 14:02:46.878266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.315 [2024-12-05 14:02:46.878286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.315 [2024-12-05 14:02:46.881968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.315 [2024-12-05 14:02:46.882216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.315 [2024-12-05 14:02:46.882236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.315 [2024-12-05 14:02:46.886069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.315 [2024-12-05 14:02:46.886302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.315 [2024-12-05 14:02:46.886322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.315 [2024-12-05 14:02:46.890396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.315 [2024-12-05 14:02:46.890648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.315 [2024-12-05 14:02:46.890668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0
dnr:0 00:31:04.315 [2024-12-05 14:02:46.894400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.315 [2024-12-05 14:02:46.894653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.315 [2024-12-05 14:02:46.894675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.595 [2024-12-05 14:02:46.898754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.595 [2024-12-05 14:02:46.899006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.595 [2024-12-05 14:02:46.899029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:04.595 [2024-12-05 14:02:46.903435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.595 [2024-12-05 14:02:46.903685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.595 [2024-12-05 14:02:46.903706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:04.595 [2024-12-05 14:02:46.908322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.595 [2024-12-05 14:02:46.908562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.595 [2024-12-05 14:02:46.908583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:04.595 [2024-12-05 14:02:46.913090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.595 [2024-12-05 14:02:46.913330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.595 [2024-12-05 14:02:46.913350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.595 [2024-12-05 14:02:46.917902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.595 [2024-12-05 14:02:46.918139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.595 [2024-12-05 14:02:46.918159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:04.595 [2024-12-05 14:02:46.922544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.595 [2024-12-05 14:02:46.922788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.595 [2024-12-05 14:02:46.922808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:04.595 [2024-12-05 14:02:46.927203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.595 [2024-12-05 14:02:46.927457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.595 [2024-12-05 14:02:46.927477] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:04.595 [2024-12-05 14:02:46.932154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.595 [2024-12-05 14:02:46.932387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.595 [2024-12-05 14:02:46.932408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.595 [2024-12-05 14:02:46.937129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.595 [2024-12-05 14:02:46.937362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.595 [2024-12-05 14:02:46.937388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:04.595 [2024-12-05 14:02:46.941497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.595 [2024-12-05 14:02:46.941736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.595 [2024-12-05 14:02:46.941756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:04.595 [2024-12-05 14:02:46.946119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.595 [2024-12-05 14:02:46.946353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:04.595 [2024-12-05 14:02:46.946378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.595 [2024-12-05 14:02:46.950976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.595 [2024-12-05 14:02:46.951208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.595 [2024-12-05 14:02:46.951228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.595 [2024-12-05 14:02:46.955583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.595 [2024-12-05 14:02:46.955819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.595 [2024-12-05 14:02:46.955840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.595 [2024-12-05 14:02:46.959861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.595 [2024-12-05 14:02:46.960091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.595 [2024-12-05 14:02:46.960111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.595 [2024-12-05 14:02:46.964269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.595 [2024-12-05 14:02:46.964502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.595 [2024-12-05 14:02:46.964523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.595 [2024-12-05 14:02:46.969024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.595 [2024-12-05 14:02:46.969265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.595 [2024-12-05 14:02:46.969286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.595 [2024-12-05 14:02:46.974089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.595 [2024-12-05 14:02:46.974308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.595 [2024-12-05 14:02:46.974328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.595 [2024-12-05 14:02:46.978921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.595 [2024-12-05 14:02:46.979155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.595 [2024-12-05 14:02:46.979178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.595 [2024-12-05 14:02:46.983549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.596 [2024-12-05 14:02:46.983770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.596 [2024-12-05 14:02:46.983790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.596 [2024-12-05 14:02:46.987963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.596 [2024-12-05 14:02:46.988195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.596 [2024-12-05 14:02:46.988215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.596 [2024-12-05 14:02:46.992349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.596 [2024-12-05 14:02:46.992585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.596 [2024-12-05 14:02:46.992605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.596 [2024-12-05 14:02:46.996483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.596 [2024-12-05 14:02:46.996721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.596 [2024-12-05 14:02:46.996741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.596 [2024-12-05 14:02:47.000759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.596 [2024-12-05 14:02:47.000995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.596 [2024-12-05 14:02:47.001015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.596 [2024-12-05 14:02:47.005025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.596 [2024-12-05 14:02:47.005258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.596 [2024-12-05 14:02:47.005278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.596 [2024-12-05 14:02:47.009426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.596 [2024-12-05 14:02:47.009676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.596 [2024-12-05 14:02:47.009696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.596 [2024-12-05 14:02:47.013768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.596 [2024-12-05 14:02:47.014011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.596 [2024-12-05 14:02:47.014032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.596 [2024-12-05 14:02:47.017855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.596 [2024-12-05 14:02:47.018083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.596 [2024-12-05 14:02:47.018102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.596 [2024-12-05 14:02:47.022039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.596 [2024-12-05 14:02:47.022268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.596 [2024-12-05 14:02:47.022288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.596 [2024-12-05 14:02:47.026396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.596 [2024-12-05 14:02:47.026647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.596 [2024-12-05 14:02:47.026668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.596 [2024-12-05 14:02:47.030466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.596 [2024-12-05 14:02:47.030706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.596 [2024-12-05 14:02:47.030726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.596 [2024-12-05 14:02:47.034516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.596 [2024-12-05 14:02:47.034761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.596 [2024-12-05 14:02:47.034781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.596 [2024-12-05 14:02:47.038540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.596 [2024-12-05 14:02:47.038786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.596 [2024-12-05 14:02:47.038806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.596 [2024-12-05 14:02:47.042586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.596 [2024-12-05 14:02:47.042840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.596 [2024-12-05 14:02:47.042860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.596 [2024-12-05 14:02:47.046646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.596 [2024-12-05 14:02:47.046887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.596 [2024-12-05 14:02:47.046907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.596 [2024-12-05 14:02:47.050717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.596 [2024-12-05 14:02:47.050957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.596 [2024-12-05 14:02:47.050978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.596 [2024-12-05 14:02:47.055082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.596 [2024-12-05 14:02:47.055324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.596 [2024-12-05 14:02:47.055346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.596 [2024-12-05 14:02:47.059709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.596 [2024-12-05 14:02:47.059951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.596 [2024-12-05 14:02:47.059971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.596 [2024-12-05 14:02:47.065171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.596 [2024-12-05 14:02:47.065338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.596 [2024-12-05 14:02:47.065357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.596 [2024-12-05 14:02:47.069790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.596 [2024-12-05 14:02:47.070031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.596 [2024-12-05 14:02:47.070051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.596 [2024-12-05 14:02:47.074498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.596 [2024-12-05 14:02:47.074727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.596 [2024-12-05 14:02:47.074747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.596 [2024-12-05 14:02:47.078947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.596 [2024-12-05 14:02:47.079176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.596 [2024-12-05 14:02:47.079196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.596 [2024-12-05 14:02:47.083383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.596 [2024-12-05 14:02:47.083617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.596 [2024-12-05 14:02:47.083637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.596 [2024-12-05 14:02:47.088107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.596 [2024-12-05 14:02:47.088340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.596 [2024-12-05 14:02:47.088360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.596 [2024-12-05 14:02:47.093511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.596 [2024-12-05 14:02:47.093596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.596 [2024-12-05 14:02:47.093618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.596 [2024-12-05 14:02:47.098032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.596 [2024-12-05 14:02:47.098262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.596 [2024-12-05 14:02:47.098283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.596 [2024-12-05 14:02:47.102404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.596 [2024-12-05 14:02:47.102644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.597 [2024-12-05 14:02:47.102664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.597 [2024-12-05 14:02:47.106558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.597 [2024-12-05 14:02:47.106790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.597 [2024-12-05 14:02:47.106811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.597 [2024-12-05 14:02:47.110754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.597 [2024-12-05 14:02:47.110999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.597 [2024-12-05 14:02:47.111020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.597 [2024-12-05 14:02:47.115104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.597 [2024-12-05 14:02:47.115349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.597 [2024-12-05 14:02:47.115374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.597 [2024-12-05 14:02:47.119405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.597 [2024-12-05 14:02:47.119641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.597 [2024-12-05 14:02:47.119662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.597 [2024-12-05 14:02:47.123827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.597 [2024-12-05 14:02:47.124077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.597 [2024-12-05 14:02:47.124097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.597 [2024-12-05 14:02:47.128194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.597 [2024-12-05 14:02:47.128443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.597 [2024-12-05 14:02:47.128463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.597 [2024-12-05 14:02:47.132614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.597 [2024-12-05 14:02:47.132857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.597 [2024-12-05 14:02:47.132877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.597 [2024-12-05 14:02:47.137006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.597 [2024-12-05 14:02:47.138198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.597 [2024-12-05 14:02:47.138219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.597 7098.00 IOPS, 887.25 MiB/s [2024-12-05T13:02:47.184Z] [2024-12-05 14:02:47.142439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.597 [2024-12-05 14:02:47.142685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.597 [2024-12-05 14:02:47.142706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.597 [2024-12-05 14:02:47.146892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.597 [2024-12-05 14:02:47.147144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.597 [2024-12-05 14:02:47.147166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.597 [2024-12-05 14:02:47.151326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.597 [2024-12-05 14:02:47.151577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.597 [2024-12-05 14:02:47.151599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.597 [2024-12-05 14:02:47.155817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.597 [2024-12-05 14:02:47.156089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.597 [2024-12-05 14:02:47.156110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.597 [2024-12-05 14:02:47.160200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.597 [2024-12-05 14:02:47.160468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.597 [2024-12-05 14:02:47.160489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.597 [2024-12-05 14:02:47.164635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.597 [2024-12-05 14:02:47.164893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.597 [2024-12-05 14:02:47.164914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.597 [2024-12-05 14:02:47.169012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.597 [2024-12-05 14:02:47.169268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.597 [2024-12-05 14:02:47.169289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:04.597 [2024-12-05 14:02:47.173548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.597 [2024-12-05 14:02:47.173796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.597 [2024-12-05 14:02:47.173818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:04.597 [2024-12-05 14:02:47.178423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.597 [2024-12-05 14:02:47.178672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.597 [2024-12-05 14:02:47.178694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:04.857 [2024-12-05 14:02:47.183188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.857 [2024-12-05 14:02:47.183437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.857 [2024-12-05 14:02:47.183460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.857 [2024-12-05 14:02:47.188373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:04.857 [2024-12-05 14:02:47.188621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:04.857 [2024-12-05 14:02:47.188641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0
dnr:0 00:31:04.857 [2024-12-05 14:02:47.193441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.857 [2024-12-05 14:02:47.193701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.858 [2024-12-05 14:02:47.193722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:04.858 [2024-12-05 14:02:47.198279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.858 [2024-12-05 14:02:47.198541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.858 [2024-12-05 14:02:47.198563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:04.858 [2024-12-05 14:02:47.202897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.858 [2024-12-05 14:02:47.203144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.858 [2024-12-05 14:02:47.203164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.858 [2024-12-05 14:02:47.207446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.858 [2024-12-05 14:02:47.207693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.858 [2024-12-05 14:02:47.207713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:04.858 [2024-12-05 14:02:47.212358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.858 [2024-12-05 14:02:47.212608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.858 [2024-12-05 14:02:47.212636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:04.858 [2024-12-05 14:02:47.217042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.858 [2024-12-05 14:02:47.217296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.858 [2024-12-05 14:02:47.217317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:04.858 [2024-12-05 14:02:47.221774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.858 [2024-12-05 14:02:47.222035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.858 [2024-12-05 14:02:47.222056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.858 [2024-12-05 14:02:47.226581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.858 [2024-12-05 14:02:47.226827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.858 [2024-12-05 14:02:47.226848] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:04.858 [2024-12-05 14:02:47.231189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.858 [2024-12-05 14:02:47.231439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.858 [2024-12-05 14:02:47.231459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:04.858 [2024-12-05 14:02:47.235630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.858 [2024-12-05 14:02:47.235872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.858 [2024-12-05 14:02:47.235893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:04.858 [2024-12-05 14:02:47.240351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.858 [2024-12-05 14:02:47.240613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.858 [2024-12-05 14:02:47.240635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.858 [2024-12-05 14:02:47.245447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.858 [2024-12-05 14:02:47.245698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:04.858 [2024-12-05 14:02:47.245718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:04.858 [2024-12-05 14:02:47.251253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.858 [2024-12-05 14:02:47.251513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.858 [2024-12-05 14:02:47.251534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:04.858 [2024-12-05 14:02:47.256136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.858 [2024-12-05 14:02:47.256401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.858 [2024-12-05 14:02:47.256423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:04.858 [2024-12-05 14:02:47.260867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.858 [2024-12-05 14:02:47.261112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.858 [2024-12-05 14:02:47.261133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.858 [2024-12-05 14:02:47.265582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.858 [2024-12-05 14:02:47.265828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.858 [2024-12-05 14:02:47.265848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:04.858 [2024-12-05 14:02:47.270795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.858 [2024-12-05 14:02:47.271047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.858 [2024-12-05 14:02:47.271068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:04.858 [2024-12-05 14:02:47.275914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.858 [2024-12-05 14:02:47.276161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.858 [2024-12-05 14:02:47.276181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:04.858 [2024-12-05 14:02:47.280777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.858 [2024-12-05 14:02:47.281023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.858 [2024-12-05 14:02:47.281044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.858 [2024-12-05 14:02:47.285481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.859 [2024-12-05 14:02:47.285739] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.859 [2024-12-05 14:02:47.285760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:04.859 [2024-12-05 14:02:47.290455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.859 [2024-12-05 14:02:47.290699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.859 [2024-12-05 14:02:47.290720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:04.859 [2024-12-05 14:02:47.295755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.859 [2024-12-05 14:02:47.296000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.859 [2024-12-05 14:02:47.296022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:04.859 [2024-12-05 14:02:47.301382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.859 [2024-12-05 14:02:47.301626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.859 [2024-12-05 14:02:47.301648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.859 [2024-12-05 14:02:47.307113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.859 [2024-12-05 14:02:47.307379] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.859 [2024-12-05 14:02:47.307401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:04.859 [2024-12-05 14:02:47.312378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.859 [2024-12-05 14:02:47.312634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.859 [2024-12-05 14:02:47.312656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:04.859 [2024-12-05 14:02:47.317276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.859 [2024-12-05 14:02:47.317524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.859 [2024-12-05 14:02:47.317546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:04.859 [2024-12-05 14:02:47.321942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.859 [2024-12-05 14:02:47.322188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.859 [2024-12-05 14:02:47.322210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.859 [2024-12-05 14:02:47.326784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with 
pdu=0x200016eff3c8 00:31:04.859 [2024-12-05 14:02:47.327033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.859 [2024-12-05 14:02:47.327055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:04.859 [2024-12-05 14:02:47.331549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.859 [2024-12-05 14:02:47.331807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.859 [2024-12-05 14:02:47.331828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:04.859 [2024-12-05 14:02:47.336426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.859 [2024-12-05 14:02:47.336673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.859 [2024-12-05 14:02:47.336694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:04.859 [2024-12-05 14:02:47.341222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.859 [2024-12-05 14:02:47.341482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.859 [2024-12-05 14:02:47.341507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.859 [2024-12-05 14:02:47.346022] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.859 [2024-12-05 14:02:47.346264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.859 [2024-12-05 14:02:47.346285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:04.859 [2024-12-05 14:02:47.350805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.859 [2024-12-05 14:02:47.351047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.859 [2024-12-05 14:02:47.351068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:04.859 [2024-12-05 14:02:47.355197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.859 [2024-12-05 14:02:47.355450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.859 [2024-12-05 14:02:47.355471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:04.859 [2024-12-05 14:02:47.359625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.859 [2024-12-05 14:02:47.359881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.859 [2024-12-05 14:02:47.359903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.859 [2024-12-05 
14:02:47.364009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.859 [2024-12-05 14:02:47.364256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.859 [2024-12-05 14:02:47.364278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:04.859 [2024-12-05 14:02:47.368445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.859 [2024-12-05 14:02:47.368696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.859 [2024-12-05 14:02:47.368717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:04.859 [2024-12-05 14:02:47.372819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.859 [2024-12-05 14:02:47.373078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.859 [2024-12-05 14:02:47.373099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:04.859 [2024-12-05 14:02:47.377248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.859 [2024-12-05 14:02:47.377496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.859 [2024-12-05 14:02:47.377518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:31:04.859 [2024-12-05 14:02:47.381672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.859 [2024-12-05 14:02:47.382104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.859 [2024-12-05 14:02:47.382125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:04.859 [2024-12-05 14:02:47.386453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.859 [2024-12-05 14:02:47.386700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.859 [2024-12-05 14:02:47.386721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:04.859 [2024-12-05 14:02:47.391087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.859 [2024-12-05 14:02:47.391332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.859 [2024-12-05 14:02:47.391353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:04.859 [2024-12-05 14:02:47.396805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.860 [2024-12-05 14:02:47.397062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.860 [2024-12-05 14:02:47.397083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.860 [2024-12-05 14:02:47.402145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.860 [2024-12-05 14:02:47.402403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.860 [2024-12-05 14:02:47.402426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:04.860 [2024-12-05 14:02:47.407232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.860 [2024-12-05 14:02:47.407488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.860 [2024-12-05 14:02:47.407509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:04.860 [2024-12-05 14:02:47.411907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.860 [2024-12-05 14:02:47.412155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.860 [2024-12-05 14:02:47.412176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:04.860 [2024-12-05 14:02:47.416482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.860 [2024-12-05 14:02:47.416745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.860 [2024-12-05 14:02:47.416766] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.860 [2024-12-05 14:02:47.421074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.860 [2024-12-05 14:02:47.421330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.860 [2024-12-05 14:02:47.421352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:04.860 [2024-12-05 14:02:47.425462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.860 [2024-12-05 14:02:47.425708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.860 [2024-12-05 14:02:47.425729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:04.860 [2024-12-05 14:02:47.429747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.860 [2024-12-05 14:02:47.429994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.860 [2024-12-05 14:02:47.430016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:04.860 [2024-12-05 14:02:47.434039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.860 [2024-12-05 14:02:47.434286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:04.860 [2024-12-05 14:02:47.434308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.860 [2024-12-05 14:02:47.438375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:04.860 [2024-12-05 14:02:47.438627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.860 [2024-12-05 14:02:47.438649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:05.121 [2024-12-05 14:02:47.442961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.121 [2024-12-05 14:02:47.443213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.121 [2024-12-05 14:02:47.443234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:05.121 [2024-12-05 14:02:47.447267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.121 [2024-12-05 14:02:47.447536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.121 [2024-12-05 14:02:47.447558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:05.121 [2024-12-05 14:02:47.451717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.121 [2024-12-05 14:02:47.451973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.121 [2024-12-05 14:02:47.451995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:05.121 [2024-12-05 14:02:47.456644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:05.121 [2024-12-05 14:02:47.456890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.121 [2024-12-05 14:02:47.456912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:05.121 [2024-12-05 14:02:47.461352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:05.121 [2024-12-05 14:02:47.461606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.121 [2024-12-05 14:02:47.461631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:05.121 [2024-12-05 14:02:47.467294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:05.121 [2024-12-05 14:02:47.467545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.121 [2024-12-05 14:02:47.467567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... the same three-line pattern — tcp.c:2241:data_crc32_calc_done *ERROR* "Data digest error" on tqpair=(0x1e904c0) with pdu=0x200016eff3c8, a WRITE sqid:1 cid:0 nsid:1 len:32 command notice at a varying lba, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0002/0022/0042/0062 — repeats for every subsequent 32-block WRITE from 14:02:47.473 through 14:02:47.831 (wall-clock 00:31:05.121 to 00:31:05.384) ...]
00:31:05.384 [2024-12-05 14:02:47.835365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8
00:31:05.384 [2024-12-05 14:02:47.835619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1
lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.384 [2024-12-05 14:02:47.835641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:05.384 [2024-12-05 14:02:47.840058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.384 [2024-12-05 14:02:47.840303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.384 [2024-12-05 14:02:47.840323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:05.384 [2024-12-05 14:02:47.844572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.384 [2024-12-05 14:02:47.844816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.384 [2024-12-05 14:02:47.844841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.384 [2024-12-05 14:02:47.848950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.384 [2024-12-05 14:02:47.849194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.384 [2024-12-05 14:02:47.849215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:05.384 [2024-12-05 14:02:47.853657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.384 [2024-12-05 14:02:47.853912] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.384 [2024-12-05 14:02:47.853933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:05.384 [2024-12-05 14:02:47.858144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.384 [2024-12-05 14:02:47.858406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.384 [2024-12-05 14:02:47.858428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:05.384 [2024-12-05 14:02:47.862509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.384 [2024-12-05 14:02:47.862764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.384 [2024-12-05 14:02:47.862785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.384 [2024-12-05 14:02:47.866934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.384 [2024-12-05 14:02:47.867181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.384 [2024-12-05 14:02:47.867202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:05.384 [2024-12-05 14:02:47.871330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 
00:31:05.384 [2024-12-05 14:02:47.871575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.384 [2024-12-05 14:02:47.871596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:05.384 [2024-12-05 14:02:47.875863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.384 [2024-12-05 14:02:47.876109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.384 [2024-12-05 14:02:47.876130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:05.384 [2024-12-05 14:02:47.880430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.384 [2024-12-05 14:02:47.880679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.384 [2024-12-05 14:02:47.880700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.384 [2024-12-05 14:02:47.884941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.384 [2024-12-05 14:02:47.885190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.384 [2024-12-05 14:02:47.885211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:05.384 [2024-12-05 14:02:47.889491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.384 [2024-12-05 14:02:47.889735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.384 [2024-12-05 14:02:47.889756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:05.384 [2024-12-05 14:02:47.894146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.384 [2024-12-05 14:02:47.894400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.384 [2024-12-05 14:02:47.894420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:05.384 [2024-12-05 14:02:47.898656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.384 [2024-12-05 14:02:47.898899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.384 [2024-12-05 14:02:47.898921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.384 [2024-12-05 14:02:47.903032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.384 [2024-12-05 14:02:47.903292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.384 [2024-12-05 14:02:47.903311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:05.384 [2024-12-05 14:02:47.907818] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.384 [2024-12-05 14:02:47.908085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.384 [2024-12-05 14:02:47.908107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:05.384 [2024-12-05 14:02:47.912332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.384 [2024-12-05 14:02:47.912591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.384 [2024-12-05 14:02:47.912612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:05.384 [2024-12-05 14:02:47.917105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.384 [2024-12-05 14:02:47.917348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.384 [2024-12-05 14:02:47.917374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.384 [2024-12-05 14:02:47.921963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.384 [2024-12-05 14:02:47.922207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.384 [2024-12-05 14:02:47.922229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:31:05.384 [2024-12-05 14:02:47.927029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.384 [2024-12-05 14:02:47.927273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.384 [2024-12-05 14:02:47.927294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:05.384 [2024-12-05 14:02:47.932121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.384 [2024-12-05 14:02:47.932386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.384 [2024-12-05 14:02:47.932406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:05.384 [2024-12-05 14:02:47.937793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.384 [2024-12-05 14:02:47.938040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.384 [2024-12-05 14:02:47.938062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.384 [2024-12-05 14:02:47.943460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.385 [2024-12-05 14:02:47.943706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.385 [2024-12-05 14:02:47.943727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:05.385 [2024-12-05 14:02:47.948298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.385 [2024-12-05 14:02:47.948548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.385 [2024-12-05 14:02:47.948570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:05.385 [2024-12-05 14:02:47.952823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.385 [2024-12-05 14:02:47.953068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.385 [2024-12-05 14:02:47.953089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:05.385 [2024-12-05 14:02:47.957302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.385 [2024-12-05 14:02:47.957556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.385 [2024-12-05 14:02:47.957577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.385 [2024-12-05 14:02:47.961778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.385 [2024-12-05 14:02:47.962021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.385 [2024-12-05 14:02:47.962042] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:05.385 [2024-12-05 14:02:47.966678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.385 [2024-12-05 14:02:47.966930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.385 [2024-12-05 14:02:47.966956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:05.644 [2024-12-05 14:02:47.971350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.644 [2024-12-05 14:02:47.971604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.644 [2024-12-05 14:02:47.971626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:05.644 [2024-12-05 14:02:47.975825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.644 [2024-12-05 14:02:47.976085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.644 [2024-12-05 14:02:47.976105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.645 [2024-12-05 14:02:47.980359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.645 [2024-12-05 14:02:47.980616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:05.645 [2024-12-05 14:02:47.980637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:05.645 [2024-12-05 14:02:47.985787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.645 [2024-12-05 14:02:47.986031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.645 [2024-12-05 14:02:47.986052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:05.645 [2024-12-05 14:02:47.991993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.645 [2024-12-05 14:02:47.992238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.645 [2024-12-05 14:02:47.992259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:05.645 [2024-12-05 14:02:47.998487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.645 [2024-12-05 14:02:47.998639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.645 [2024-12-05 14:02:47.998658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.645 [2024-12-05 14:02:48.004999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.645 [2024-12-05 14:02:48.005274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.645 [2024-12-05 14:02:48.005295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:05.645 [2024-12-05 14:02:48.011052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.645 [2024-12-05 14:02:48.011373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.645 [2024-12-05 14:02:48.011395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:05.645 [2024-12-05 14:02:48.017666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.645 [2024-12-05 14:02:48.017951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.645 [2024-12-05 14:02:48.017972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:05.645 [2024-12-05 14:02:48.023646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.645 [2024-12-05 14:02:48.023927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.645 [2024-12-05 14:02:48.023948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.645 [2024-12-05 14:02:48.029619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.645 [2024-12-05 14:02:48.029898] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.645 [2024-12-05 14:02:48.029919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:05.645 [2024-12-05 14:02:48.035792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.645 [2024-12-05 14:02:48.036072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.645 [2024-12-05 14:02:48.036093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:05.645 [2024-12-05 14:02:48.041977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.645 [2024-12-05 14:02:48.042280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.645 [2024-12-05 14:02:48.042302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:05.645 [2024-12-05 14:02:48.047997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.645 [2024-12-05 14:02:48.048275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.645 [2024-12-05 14:02:48.048297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.645 [2024-12-05 14:02:48.054347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 
00:31:05.645 [2024-12-05 14:02:48.054625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.645 [2024-12-05 14:02:48.054647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:05.645 [2024-12-05 14:02:48.060493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.645 [2024-12-05 14:02:48.060757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.645 [2024-12-05 14:02:48.060779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:05.645 [2024-12-05 14:02:48.066585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.645 [2024-12-05 14:02:48.066889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.645 [2024-12-05 14:02:48.066909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:05.645 [2024-12-05 14:02:48.072544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.645 [2024-12-05 14:02:48.072848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.645 [2024-12-05 14:02:48.072868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.645 [2024-12-05 14:02:48.078671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.645 [2024-12-05 14:02:48.078895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.645 [2024-12-05 14:02:48.078916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:05.645 [2024-12-05 14:02:48.084225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.645 [2024-12-05 14:02:48.084509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.645 [2024-12-05 14:02:48.084530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:05.645 [2024-12-05 14:02:48.090199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.645 [2024-12-05 14:02:48.090475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.645 [2024-12-05 14:02:48.090496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:05.645 [2024-12-05 14:02:48.096058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.645 [2024-12-05 14:02:48.096345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.645 [2024-12-05 14:02:48.096373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.645 [2024-12-05 14:02:48.102234] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.645 [2024-12-05 14:02:48.102499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.645 [2024-12-05 14:02:48.102520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:05.645 [2024-12-05 14:02:48.108609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.645 [2024-12-05 14:02:48.108844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.645 [2024-12-05 14:02:48.108865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:05.645 [2024-12-05 14:02:48.113576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.645 [2024-12-05 14:02:48.113791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.645 [2024-12-05 14:02:48.113813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:05.645 [2024-12-05 14:02:48.118146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.645 [2024-12-05 14:02:48.118363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.645 [2024-12-05 14:02:48.118392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:31:05.645 [2024-12-05 14:02:48.122984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.645 [2024-12-05 14:02:48.123198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.645 [2024-12-05 14:02:48.123217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:05.645 [2024-12-05 14:02:48.127762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.645 [2024-12-05 14:02:48.127977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.645 [2024-12-05 14:02:48.127996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:05.645 [2024-12-05 14:02:48.132191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.645 [2024-12-05 14:02:48.132412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.646 [2024-12-05 14:02:48.132434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:05.646 [2024-12-05 14:02:48.137421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e904c0) with pdu=0x200016eff3c8 00:31:05.646 [2024-12-05 14:02:48.137759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.646 [2024-12-05 14:02:48.137778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.646 6684.50 IOPS, 835.56 MiB/s 00:31:05.646 Latency(us) 00:31:05.646 [2024-12-05T13:02:48.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.646 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:05.646 nvme0n1 : 2.00 6680.05 835.01 0.00 0.00 2390.81 1810.04 6803.26 00:31:05.646 [2024-12-05T13:02:48.233Z] =================================================================================================================== 00:31:05.646 [2024-12-05T13:02:48.233Z] Total : 6680.05 835.01 0.00 0.00 2390.81 1810.04 6803.26 00:31:05.646 { 00:31:05.646 "results": [ 00:31:05.646 { 00:31:05.646 "job": "nvme0n1", 00:31:05.646 "core_mask": "0x2", 00:31:05.646 "workload": "randwrite", 00:31:05.646 "status": "finished", 00:31:05.646 "queue_depth": 16, 00:31:05.646 "io_size": 131072, 00:31:05.646 "runtime": 2.004176, 00:31:05.646 "iops": 6680.05205131685, 00:31:05.646 "mibps": 835.0065064146063, 00:31:05.646 "io_failed": 0, 00:31:05.646 "io_timeout": 0, 00:31:05.646 "avg_latency_us": 2390.8059515984464, 00:31:05.646 "min_latency_us": 1810.0419047619048, 00:31:05.646 "max_latency_us": 6803.260952380952 00:31:05.646 } 00:31:05.646 ], 00:31:05.646 "core_count": 1 00:31:05.646 } 00:31:05.646 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:05.646 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:05.646 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:05.646 | .driver_specific 00:31:05.646 | .nvme_error 00:31:05.646 | .status_code 00:31:05.646 | .command_transient_transport_error' 00:31:05.646 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:05.905 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 432 > 0 )) 00:31:05.905 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 816407 00:31:05.905 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 816407 ']' 00:31:05.905 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 816407 00:31:05.905 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:31:05.905 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:05.905 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 816407 00:31:05.905 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:05.905 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:05.905 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 816407' 00:31:05.905 killing process with pid 816407 00:31:05.905 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 816407 00:31:05.905 Received shutdown signal, test time was about 2.000000 seconds 00:31:05.905 00:31:05.905 Latency(us) 00:31:05.905 [2024-12-05T13:02:48.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.905 [2024-12-05T13:02:48.492Z] =================================================================================================================== 00:31:05.905 [2024-12-05T13:02:48.492Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:05.905 14:02:48 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 816407 00:31:06.164 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 814636 00:31:06.164 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 814636 ']' 00:31:06.164 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 814636 00:31:06.164 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:31:06.164 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:06.164 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 814636 00:31:06.164 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:06.164 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:06.164 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 814636' 00:31:06.164 killing process with pid 814636 00:31:06.164 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 814636 00:31:06.164 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 814636 00:31:06.423 00:31:06.423 real 0m14.180s 00:31:06.423 user 0m27.142s 00:31:06.423 sys 0m4.595s 00:31:06.423 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:06.423 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:06.423 ************************************ 00:31:06.423 END TEST nvmf_digest_error 00:31:06.423 
************************************ 00:31:06.423 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:31:06.423 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:31:06.423 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:06.423 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:31:06.423 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:06.423 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:31:06.423 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:06.423 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:06.423 rmmod nvme_tcp 00:31:06.423 rmmod nvme_fabrics 00:31:06.423 rmmod nvme_keyring 00:31:06.423 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:06.423 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:31:06.423 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:31:06.423 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 814636 ']' 00:31:06.423 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 814636 00:31:06.423 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 814636 ']' 00:31:06.424 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 814636 00:31:06.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (814636) - No such process 00:31:06.424 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 814636 is not found' 00:31:06.424 Process with pid 814636 is not found 00:31:06.424 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' 
== iso ']' 00:31:06.424 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:06.424 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:06.424 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:31:06.424 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:31:06.424 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:06.424 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:31:06.424 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:06.424 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:06.424 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:06.424 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:06.424 14:02:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.957 14:02:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:08.957 00:31:08.957 real 0m36.564s 00:31:08.957 user 0m55.799s 00:31:08.957 sys 0m13.618s 00:31:08.957 14:02:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:08.957 14:02:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:08.957 ************************************ 00:31:08.957 END TEST nvmf_digest 00:31:08.957 ************************************ 00:31:08.957 14:02:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:31:08.957 14:02:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:31:08.957 14:02:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:31:08.957 14:02:51 nvmf_tcp.nvmf_host 
-- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:31:08.957 14:02:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:08.957 14:02:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:08.957 14:02:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.957 ************************************ 00:31:08.957 START TEST nvmf_bdevperf 00:31:08.957 ************************************ 00:31:08.957 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:31:08.957 * Looking for test storage... 00:31:08.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:08.957 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:08.957 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:31:08.957 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:08.957 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:08.957 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:08.957 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:08.957 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:08.957 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:31:08.957 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:31:08.957 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:31:08.957 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@337 -- # read -ra ver2 00:31:08.957 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:31:08.957 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:31:08.957 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:31:08.957 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:08.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.958 --rc genhtml_branch_coverage=1 00:31:08.958 --rc genhtml_function_coverage=1 00:31:08.958 --rc genhtml_legend=1 00:31:08.958 --rc geninfo_all_blocks=1 00:31:08.958 --rc geninfo_unexecuted_blocks=1 00:31:08.958 00:31:08.958 ' 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:08.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.958 --rc genhtml_branch_coverage=1 00:31:08.958 --rc genhtml_function_coverage=1 00:31:08.958 --rc genhtml_legend=1 00:31:08.958 --rc geninfo_all_blocks=1 00:31:08.958 --rc geninfo_unexecuted_blocks=1 00:31:08.958 00:31:08.958 ' 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:08.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.958 --rc genhtml_branch_coverage=1 00:31:08.958 --rc genhtml_function_coverage=1 00:31:08.958 --rc genhtml_legend=1 00:31:08.958 --rc geninfo_all_blocks=1 00:31:08.958 --rc geninfo_unexecuted_blocks=1 00:31:08.958 00:31:08.958 ' 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:08.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.958 --rc genhtml_branch_coverage=1 00:31:08.958 --rc genhtml_function_coverage=1 00:31:08.958 --rc genhtml_legend=1 00:31:08.958 --rc geninfo_all_blocks=1 00:31:08.958 --rc geninfo_unexecuted_blocks=1 00:31:08.958 00:31:08.958 ' 00:31:08.958 
14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.958 
14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:08.958 14:02:51 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:08.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 
00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:31:08.958 14:02:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:31:15.531 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:15.531 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.531 14:02:56 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:15.531 Found net devices under 0000:86:00.0: cvl_0_0 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.531 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:15.532 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:15.532 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:15.532 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:15.532 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.532 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:15.532 Found net devices under 0000:86:00.1: cvl_0_1 00:31:15.532 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.532 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:31:15.532 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:31:15.532 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:15.532 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:15.532 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:15.532 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:15.532 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:15.532 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:15.532 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:15.532 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:15.532 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:15.532 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:15.532 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:15.532 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:15.532 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:15.532 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:15.532 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:15.532 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:15.532 14:02:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:15.532 14:02:56 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:15.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:15.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.438 ms 00:31:15.532 00:31:15.532 --- 10.0.0.2 ping statistics --- 00:31:15.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.532 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:15.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:15.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:31:15.532 00:31:15.532 --- 10.0.0.1 ping statistics --- 00:31:15.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.532 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=820415 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 820415 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 820415 ']' 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:15.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:15.532 [2024-12-05 14:02:57.269816] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:31:15.532 [2024-12-05 14:02:57.269858] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:15.532 [2024-12-05 14:02:57.346183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:15.532 [2024-12-05 14:02:57.387848] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:15.532 [2024-12-05 14:02:57.387885] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
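The nvmf_tcp_init steps earlier in the trace (`ip netns add`, `ip link set ... netns`, `ip addr add`, iptables ACCEPT on 4420) implement a simple two-endpoint addressing plan: the target NIC (cvl_0_0) moves into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, while the initiator NIC (cvl_0_1) keeps 10.0.0.1/24 in the root namespace. A sketch of the invariant the two ping checks verify, with all values copied from the trace:

```python
import ipaddress

subnet = ipaddress.ip_network("10.0.0.0/24")
target_ip = ipaddress.ip_address("10.0.0.2")      # cvl_0_0, inside the netns
initiator_ip = ipaddress.ip_address("10.0.0.1")   # cvl_0_1, root namespace
nvmf_port = 4420  # NVMe/TCP port opened via the iptables rule in the log

# Both endpoints must sit in the same /24 for the pings to succeed.
print(target_ip in subnet and initiator_ip in subnet)
```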
00:31:15.532 [2024-12-05 14:02:57.387892] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:15.532 [2024-12-05 14:02:57.387899] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:15.532 [2024-12-05 14:02:57.387905] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:15.532 [2024-12-05 14:02:57.389328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:15.532 [2024-12-05 14:02:57.389439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.532 [2024-12-05 14:02:57.389439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:15.532 [2024-12-05 14:02:57.525780] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.532 14:02:57 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:15.532 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.533 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:15.533 Malloc0 00:31:15.533 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.533 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:15.533 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.533 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:15.533 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.533 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:15.533 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.533 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:15.533 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.533 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:15.533 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.533 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:15.533 [2024-12-05 14:02:57.587728] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:15.533 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:31:15.533 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:31:15.533 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:31:15.533 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:31:15.533 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:31:15.533 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:15.533 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:15.533 { 00:31:15.533 "params": { 00:31:15.533 "name": "Nvme$subsystem", 00:31:15.533 "trtype": "$TEST_TRANSPORT", 00:31:15.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:15.533 "adrfam": "ipv4", 00:31:15.533 "trsvcid": "$NVMF_PORT", 00:31:15.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:15.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:15.533 "hdgst": ${hdgst:-false}, 00:31:15.533 "ddgst": ${ddgst:-false} 00:31:15.533 }, 00:31:15.533 "method": "bdev_nvme_attach_controller" 00:31:15.533 } 00:31:15.533 EOF 00:31:15.533 )") 00:31:15.533 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:31:15.533 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:31:15.533 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:31:15.533 14:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:15.533 "params": { 00:31:15.533 "name": "Nvme1", 00:31:15.533 "trtype": "tcp", 00:31:15.533 "traddr": "10.0.0.2", 00:31:15.533 "adrfam": "ipv4", 00:31:15.533 "trsvcid": "4420", 00:31:15.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:15.533 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:15.533 "hdgst": false, 00:31:15.533 "ddgst": false 00:31:15.533 }, 00:31:15.533 "method": "bdev_nvme_attach_controller" 00:31:15.533 }' 00:31:15.533 [2024-12-05 14:02:57.637901] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:31:15.533 [2024-12-05 14:02:57.637952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid820510 ] 00:31:15.533 [2024-12-05 14:02:57.715271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:15.533 [2024-12-05 14:02:57.756480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.533 Running I/O for 1 seconds... 
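gen_nvmf_target_json above assembles one heredoc per subsystem and filters the result through `jq`. A hedged Python reconstruction of the single attach-controller document it printed — this models the emitted JSON with values copied from the log, not the shell function itself:

```python
import json

# Reconstruction of the config printed by gen_nvmf_target_json in the trace.
config = {
    "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": False,
        "ddgst": False,
    },
    "method": "bdev_nvme_attach_controller",
}
print(json.dumps(config, indent=2))
```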
00:31:16.470 11178.00 IOPS, 43.66 MiB/s 00:31:16.470 Latency(us) 00:31:16.470 [2024-12-05T13:02:59.057Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:16.470 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:16.470 Verification LBA range: start 0x0 length 0x4000 00:31:16.470 Nvme1n1 : 1.01 11229.08 43.86 0.00 0.00 11349.71 1068.86 13419.28 00:31:16.470 [2024-12-05T13:02:59.057Z] =================================================================================================================== 00:31:16.470 [2024-12-05T13:02:59.057Z] Total : 11229.08 43.86 0.00 0.00 11349.71 1068.86 13419.28 00:31:16.729 14:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=820797 00:31:16.729 14:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:31:16.729 14:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:31:16.729 14:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:31:16.729 14:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:31:16.729 14:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:31:16.729 14:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:16.729 14:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:16.729 { 00:31:16.729 "params": { 00:31:16.729 "name": "Nvme$subsystem", 00:31:16.729 "trtype": "$TEST_TRANSPORT", 00:31:16.729 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:16.729 "adrfam": "ipv4", 00:31:16.729 "trsvcid": "$NVMF_PORT", 00:31:16.729 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:16.729 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:16.729 "hdgst": ${hdgst:-false}, 00:31:16.729 "ddgst": 
${ddgst:-false} 00:31:16.729 }, 00:31:16.729 "method": "bdev_nvme_attach_controller" 00:31:16.729 } 00:31:16.729 EOF 00:31:16.729 )") 00:31:16.729 14:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:31:16.730 14:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:31:16.730 14:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:31:16.730 14:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:16.730 "params": { 00:31:16.730 "name": "Nvme1", 00:31:16.730 "trtype": "tcp", 00:31:16.730 "traddr": "10.0.0.2", 00:31:16.730 "adrfam": "ipv4", 00:31:16.730 "trsvcid": "4420", 00:31:16.730 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:16.730 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:16.730 "hdgst": false, 00:31:16.730 "ddgst": false 00:31:16.730 }, 00:31:16.730 "method": "bdev_nvme_attach_controller" 00:31:16.730 }' 00:31:16.730 [2024-12-05 14:02:59.175739] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:31:16.730 [2024-12-05 14:02:59.175790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid820797 ] 00:31:16.730 [2024-12-05 14:02:59.249725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.730 [2024-12-05 14:02:59.289960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.298 Running I/O for 15 seconds... 
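The 1-second bdevperf run earlier reported 11229.08 IOPS at 43.86 MiB/s. Those two figures are mutually consistent given the 4096-byte IO size passed via `-o 4096`:

```python
# Sanity-check "11229.08 IOPS, 43.86 MiB/s" from the 1-second run:
# throughput in MiB/s = IOPS * io_size / 2**20.
iops = 11229.08
io_size = 4096  # bytes, from "bdevperf ... -o 4096"
mib_per_s = iops * io_size / 2**20
print(round(mib_per_s, 2))  # 43.86
```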
00:31:19.170 11298.00 IOPS, 44.13 MiB/s [2024-12-05T13:03:02.327Z] 11403.50 IOPS, 44.54 MiB/s [2024-12-05T13:03:02.327Z] 14:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 820415 00:31:19.740 14:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:31:19.740 [2024-12-05 14:03:02.144733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.740 [2024-12-05 14:03:02.144770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.740 [2024-12-05 14:03:02.144788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.740 [2024-12-05 14:03:02.144798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.740 [2024-12-05 14:03:02.144807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.740 [2024-12-05 14:03:02.144816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.740 [2024-12-05 14:03:02.144824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.740 [2024-12-05 14:03:02.144832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.740 [2024-12-05 14:03:02.144841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.740 [2024-12-05 14:03:02.144849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.740 
[... identical nvme_qpair notice pairs (READ sqid:1 ... / ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0) repeat for lba 98032 through 98480, len:8, lba step 8; only the cid and lba fields vary ...]
00:31:19.742 [2024-12-05 14:03:02.145787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.145795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.742 [2024-12-05 14:03:02.145802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.145810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.742 [2024-12-05 14:03:02.145816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.145824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.742 [2024-12-05 14:03:02.145831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.145839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.742 [2024-12-05 14:03:02.145846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.145855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.742 [2024-12-05 14:03:02.145862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.145871] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.742 [2024-12-05 14:03:02.145877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.145886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.742 [2024-12-05 14:03:02.145892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.145900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.742 [2024-12-05 14:03:02.145908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.145916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.742 [2024-12-05 14:03:02.145922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.145930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.742 [2024-12-05 14:03:02.145937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.145947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.742 [2024-12-05 14:03:02.145954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.145964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.742 [2024-12-05 14:03:02.145970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.145978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.742 [2024-12-05 14:03:02.145985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.145993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.742 [2024-12-05 14:03:02.146000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.146008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.742 [2024-12-05 14:03:02.146018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.146026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.742 [2024-12-05 14:03:02.146033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.146041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:19.742 [2024-12-05 14:03:02.146047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.146055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.742 [2024-12-05 14:03:02.146061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.146070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.742 [2024-12-05 14:03:02.146078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.146088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.742 [2024-12-05 14:03:02.146094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.146102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.742 [2024-12-05 14:03:02.146109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.146118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:98608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.742 [2024-12-05 14:03:02.146125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.146133] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.742 [2024-12-05 14:03:02.146141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.146149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.742 [2024-12-05 14:03:02.146155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.146163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.742 [2024-12-05 14:03:02.146169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.146177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.742 [2024-12-05 14:03:02.146184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.146192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.742 [2024-12-05 14:03:02.146199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.146207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.742 [2024-12-05 14:03:02.146213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.146221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.742 [2024-12-05 14:03:02.146227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.146235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.742 [2024-12-05 14:03:02.146241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.742 [2024-12-05 14:03:02.146250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:98696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 
[2024-12-05 14:03:02.146302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:98712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:98720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:98752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:98792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:98800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:98832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:19.743 [2024-12-05 14:03:02.146668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:98864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146754] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:98880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:98920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:98944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.743 [2024-12-05 14:03:02.146879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.146886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159b6c0 is same with the state(6) to be set 00:31:19.743 [2024-12-05 14:03:02.146895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.743 [2024-12-05 14:03:02.146901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.743 [2024-12-05 14:03:02.146907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98952 len:8 PRP1 0x0 PRP2 0x0 00:31:19.743 [2024-12-05 14:03:02.146915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.743 [2024-12-05 14:03:02.149761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting 
controller 00:31:19.743 [2024-12-05 14:03:02.149814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:19.743 [2024-12-05 14:03:02.150428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.743 [2024-12-05 14:03:02.150474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:19.743 [2024-12-05 14:03:02.150499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:19.743 [2024-12-05 14:03:02.151086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:19.744 [2024-12-05 14:03:02.151460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:19.744 [2024-12-05 14:03:02.151468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:19.744 [2024-12-05 14:03:02.151477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:19.744 [2024-12-05 14:03:02.151485] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:19.744 [2024-12-05 14:03:02.163016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:19.744 [2024-12-05 14:03:02.163307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:19.744 [2024-12-05 14:03:02.163355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:19.744 [2024-12-05 14:03:02.163407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:19.744 [2024-12-05 14:03:02.163919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:19.744 [2024-12-05 14:03:02.164091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:19.744 [2024-12-05 14:03:02.164100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:19.744 [2024-12-05 14:03:02.164107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:19.744 [2024-12-05 14:03:02.164115] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:19.744 [2024-12-05 14:03:02.175761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:19.744 [2024-12-05 14:03:02.176050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:19.744 [2024-12-05 14:03:02.176097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:19.744 [2024-12-05 14:03:02.176123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:19.744 [2024-12-05 14:03:02.176633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:19.744 [2024-12-05 14:03:02.176795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:19.744 [2024-12-05 14:03:02.176804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:19.744 [2024-12-05 14:03:02.176811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:19.744 [2024-12-05 14:03:02.176817] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:19.744 [2024-12-05 14:03:02.188510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:19.744 [2024-12-05 14:03:02.188899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:19.744 [2024-12-05 14:03:02.188952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:19.744 [2024-12-05 14:03:02.188976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:19.744 [2024-12-05 14:03:02.189520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:19.744 [2024-12-05 14:03:02.189682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:19.744 [2024-12-05 14:03:02.189691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:19.744 [2024-12-05 14:03:02.189697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:19.744 [2024-12-05 14:03:02.189704] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:19.744 [2024-12-05 14:03:02.201376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:19.744 [2024-12-05 14:03:02.201790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:19.744 [2024-12-05 14:03:02.201829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:19.744 [2024-12-05 14:03:02.201854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:19.744 [2024-12-05 14:03:02.202430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:19.744 [2024-12-05 14:03:02.202831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:19.744 [2024-12-05 14:03:02.202849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:19.744 [2024-12-05 14:03:02.202864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:19.744 [2024-12-05 14:03:02.202878] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:19.744 [2024-12-05 14:03:02.216145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:19.744 [2024-12-05 14:03:02.216658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:19.744 [2024-12-05 14:03:02.216681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:19.744 [2024-12-05 14:03:02.216693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:19.744 [2024-12-05 14:03:02.216949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:19.744 [2024-12-05 14:03:02.217206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:19.744 [2024-12-05 14:03:02.217220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:19.744 [2024-12-05 14:03:02.217229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:19.744 [2024-12-05 14:03:02.217240] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:19.744 [2024-12-05 14:03:02.229189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:19.744 [2024-12-05 14:03:02.229604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:19.744 [2024-12-05 14:03:02.229622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:19.744 [2024-12-05 14:03:02.229630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:19.744 [2024-12-05 14:03:02.229805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:19.744 [2024-12-05 14:03:02.229981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:19.744 [2024-12-05 14:03:02.229991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:19.744 [2024-12-05 14:03:02.229999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:19.744 [2024-12-05 14:03:02.230007] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:19.744 [2024-12-05 14:03:02.241930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:19.744 [2024-12-05 14:03:02.242271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:19.744 [2024-12-05 14:03:02.242288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:19.744 [2024-12-05 14:03:02.242295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:19.744 [2024-12-05 14:03:02.242463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:19.744 [2024-12-05 14:03:02.242624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:19.744 [2024-12-05 14:03:02.242633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:19.744 [2024-12-05 14:03:02.242639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:19.744 [2024-12-05 14:03:02.242649] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:19.744 [2024-12-05 14:03:02.254785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:19.744 [2024-12-05 14:03:02.255107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:19.744 [2024-12-05 14:03:02.255124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:19.744 [2024-12-05 14:03:02.255132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:19.744 [2024-12-05 14:03:02.255292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:19.744 [2024-12-05 14:03:02.255458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:19.744 [2024-12-05 14:03:02.255470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:19.744 [2024-12-05 14:03:02.255477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:19.744 [2024-12-05 14:03:02.255484] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:19.744 [2024-12-05 14:03:02.267630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:19.744 [2024-12-05 14:03:02.268051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:19.744 [2024-12-05 14:03:02.268105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:19.744 [2024-12-05 14:03:02.268131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:19.744 [2024-12-05 14:03:02.268669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:19.744 [2024-12-05 14:03:02.268831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:19.744 [2024-12-05 14:03:02.268841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:19.744 [2024-12-05 14:03:02.268848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:19.744 [2024-12-05 14:03:02.268854] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:19.744 [2024-12-05 14:03:02.280462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:19.744 [2024-12-05 14:03:02.280813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:19.744 [2024-12-05 14:03:02.280830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:19.744 [2024-12-05 14:03:02.280838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:19.745 [2024-12-05 14:03:02.281009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:19.745 [2024-12-05 14:03:02.281185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:19.745 [2024-12-05 14:03:02.281195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:19.745 [2024-12-05 14:03:02.281202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:19.745 [2024-12-05 14:03:02.281208] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:19.745 [2024-12-05 14:03:02.293292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:19.745 [2024-12-05 14:03:02.293654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:19.745 [2024-12-05 14:03:02.293670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:19.745 [2024-12-05 14:03:02.293679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:19.745 [2024-12-05 14:03:02.293838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:19.745 [2024-12-05 14:03:02.293997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:19.745 [2024-12-05 14:03:02.294007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:19.745 [2024-12-05 14:03:02.294013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:19.745 [2024-12-05 14:03:02.294021] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:19.745 [2024-12-05 14:03:02.306044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:19.745 [2024-12-05 14:03:02.306442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:19.745 [2024-12-05 14:03:02.306490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:19.745 [2024-12-05 14:03:02.306515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:19.745 [2024-12-05 14:03:02.307022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:19.745 [2024-12-05 14:03:02.307184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:19.745 [2024-12-05 14:03:02.307194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:19.745 [2024-12-05 14:03:02.307200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:19.745 [2024-12-05 14:03:02.307206] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:19.745 [2024-12-05 14:03:02.318979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:19.745 [2024-12-05 14:03:02.319385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:19.745 [2024-12-05 14:03:02.319403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:19.745 [2024-12-05 14:03:02.319411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:19.745 [2024-12-05 14:03:02.319586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:19.745 [2024-12-05 14:03:02.319760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:19.745 [2024-12-05 14:03:02.319770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:19.745 [2024-12-05 14:03:02.319776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:19.745 [2024-12-05 14:03:02.319783] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.008 [2024-12-05 14:03:02.331816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.008 [2024-12-05 14:03:02.332213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.008 [2024-12-05 14:03:02.332229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.008 [2024-12-05 14:03:02.332239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.008 [2024-12-05 14:03:02.332407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.008 [2024-12-05 14:03:02.332589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.008 [2024-12-05 14:03:02.332599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.008 [2024-12-05 14:03:02.332605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.008 [2024-12-05 14:03:02.332612] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.008 [2024-12-05 14:03:02.344581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.008 [2024-12-05 14:03:02.344977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.008 [2024-12-05 14:03:02.344994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.008 [2024-12-05 14:03:02.345001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.008 [2024-12-05 14:03:02.345160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.008 [2024-12-05 14:03:02.345321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.008 [2024-12-05 14:03:02.345330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.008 [2024-12-05 14:03:02.345337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.008 [2024-12-05 14:03:02.345343] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.008 [2024-12-05 14:03:02.357330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.008 [2024-12-05 14:03:02.357767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.008 [2024-12-05 14:03:02.357813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.008 [2024-12-05 14:03:02.357837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.008 [2024-12-05 14:03:02.358343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.009 [2024-12-05 14:03:02.358511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.009 [2024-12-05 14:03:02.358521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.009 [2024-12-05 14:03:02.358527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.009 [2024-12-05 14:03:02.358534] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.009 [2024-12-05 14:03:02.370061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.009 [2024-12-05 14:03:02.370497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.009 [2024-12-05 14:03:02.370515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.009 [2024-12-05 14:03:02.370523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.009 [2024-12-05 14:03:02.370683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.009 [2024-12-05 14:03:02.370845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.009 [2024-12-05 14:03:02.370855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.009 [2024-12-05 14:03:02.370862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.009 [2024-12-05 14:03:02.370868] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.009 [2024-12-05 14:03:02.382955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.009 [2024-12-05 14:03:02.383375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.009 [2024-12-05 14:03:02.383394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.009 [2024-12-05 14:03:02.383401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.009 [2024-12-05 14:03:02.383562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.009 [2024-12-05 14:03:02.383722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.009 [2024-12-05 14:03:02.383731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.009 [2024-12-05 14:03:02.383737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.009 [2024-12-05 14:03:02.383743] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.009 [2024-12-05 14:03:02.395780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.009 [2024-12-05 14:03:02.396139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.009 [2024-12-05 14:03:02.396157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.009 [2024-12-05 14:03:02.396165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.009 [2024-12-05 14:03:02.396333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.009 [2024-12-05 14:03:02.396509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.009 [2024-12-05 14:03:02.396519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.009 [2024-12-05 14:03:02.396526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.009 [2024-12-05 14:03:02.396533] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.009 [2024-12-05 14:03:02.408872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.009 [2024-12-05 14:03:02.409245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.009 [2024-12-05 14:03:02.409279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.009 [2024-12-05 14:03:02.409288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.009 [2024-12-05 14:03:02.409469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.009 [2024-12-05 14:03:02.409644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.009 [2024-12-05 14:03:02.409655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.009 [2024-12-05 14:03:02.409662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.009 [2024-12-05 14:03:02.409674] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.009 [2024-12-05 14:03:02.422048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.009 [2024-12-05 14:03:02.422493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.009 [2024-12-05 14:03:02.422538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.009 [2024-12-05 14:03:02.422563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.009 [2024-12-05 14:03:02.423129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.009 [2024-12-05 14:03:02.423290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.009 [2024-12-05 14:03:02.423300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.009 [2024-12-05 14:03:02.423306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.009 [2024-12-05 14:03:02.423313] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.009 [2024-12-05 14:03:02.434911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.009 [2024-12-05 14:03:02.435312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.009 [2024-12-05 14:03:02.435357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.009 [2024-12-05 14:03:02.435396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.009 [2024-12-05 14:03:02.435980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.009 [2024-12-05 14:03:02.436406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.009 [2024-12-05 14:03:02.436416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.009 [2024-12-05 14:03:02.436423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.009 [2024-12-05 14:03:02.436429] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.009 [2024-12-05 14:03:02.447715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.009 [2024-12-05 14:03:02.448129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.009 [2024-12-05 14:03:02.448177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.009 [2024-12-05 14:03:02.448202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.009 [2024-12-05 14:03:02.448800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.010 [2024-12-05 14:03:02.449400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.010 [2024-12-05 14:03:02.449410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.010 [2024-12-05 14:03:02.449418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.010 [2024-12-05 14:03:02.449424] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.010 [2024-12-05 14:03:02.460560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.010 [2024-12-05 14:03:02.460973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.010 [2024-12-05 14:03:02.460990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.010 [2024-12-05 14:03:02.460997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.010 [2024-12-05 14:03:02.461157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.010 [2024-12-05 14:03:02.461317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.010 [2024-12-05 14:03:02.461326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.010 [2024-12-05 14:03:02.461333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.010 [2024-12-05 14:03:02.461339] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.010 [2024-12-05 14:03:02.473341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.010 [2024-12-05 14:03:02.473762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.010 [2024-12-05 14:03:02.473812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.010 [2024-12-05 14:03:02.473836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.010 [2024-12-05 14:03:02.474394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.010 [2024-12-05 14:03:02.474556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.010 [2024-12-05 14:03:02.474567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.010 [2024-12-05 14:03:02.474573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.010 [2024-12-05 14:03:02.474579] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.010 [2024-12-05 14:03:02.486113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.010 [2024-12-05 14:03:02.486461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.010 [2024-12-05 14:03:02.486478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.010 [2024-12-05 14:03:02.486486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.010 [2024-12-05 14:03:02.486645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.010 [2024-12-05 14:03:02.486804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.010 [2024-12-05 14:03:02.486813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.010 [2024-12-05 14:03:02.486820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.010 [2024-12-05 14:03:02.486827] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.010 [2024-12-05 14:03:02.499043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.010 [2024-12-05 14:03:02.499467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.010 [2024-12-05 14:03:02.499521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.010 [2024-12-05 14:03:02.499553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.010 [2024-12-05 14:03:02.500136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.010 [2024-12-05 14:03:02.500657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.010 [2024-12-05 14:03:02.500667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.010 [2024-12-05 14:03:02.500674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.010 [2024-12-05 14:03:02.500681] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.010 [2024-12-05 14:03:02.511928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.010 [2024-12-05 14:03:02.512349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.010 [2024-12-05 14:03:02.512398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.010 [2024-12-05 14:03:02.512425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.010 [2024-12-05 14:03:02.513009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.010 [2024-12-05 14:03:02.513519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.010 [2024-12-05 14:03:02.513530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.010 [2024-12-05 14:03:02.513536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.010 [2024-12-05 14:03:02.513542] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.010 [2024-12-05 14:03:02.524783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.010 [2024-12-05 14:03:02.525214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.010 [2024-12-05 14:03:02.525258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.010 [2024-12-05 14:03:02.525282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.010 [2024-12-05 14:03:02.525799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.010 [2024-12-05 14:03:02.526167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.010 [2024-12-05 14:03:02.526184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.010 [2024-12-05 14:03:02.526199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.010 [2024-12-05 14:03:02.526212] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.010 [2024-12-05 14:03:02.539349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.010 [2024-12-05 14:03:02.539856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.010 [2024-12-05 14:03:02.539879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.010 [2024-12-05 14:03:02.539890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.010 [2024-12-05 14:03:02.540135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.010 [2024-12-05 14:03:02.540389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.011 [2024-12-05 14:03:02.540406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.011 [2024-12-05 14:03:02.540416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.011 [2024-12-05 14:03:02.540426] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.011 [2024-12-05 14:03:02.552354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.011 [2024-12-05 14:03:02.552806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.011 [2024-12-05 14:03:02.552861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.011 [2024-12-05 14:03:02.552886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.011 [2024-12-05 14:03:02.553454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.011 [2024-12-05 14:03:02.553625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.011 [2024-12-05 14:03:02.553635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.011 [2024-12-05 14:03:02.553642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.011 [2024-12-05 14:03:02.553649] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.011 [2024-12-05 14:03:02.565185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.011 [2024-12-05 14:03:02.565603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.011 [2024-12-05 14:03:02.565648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.011 [2024-12-05 14:03:02.565673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.011 [2024-12-05 14:03:02.566255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.011 [2024-12-05 14:03:02.566575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.011 [2024-12-05 14:03:02.566595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.011 [2024-12-05 14:03:02.566609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.011 [2024-12-05 14:03:02.566624] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.011 [2024-12-05 14:03:02.580240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.011 [2024-12-05 14:03:02.580770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.011 [2024-12-05 14:03:02.580794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.011 [2024-12-05 14:03:02.580804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.011 [2024-12-05 14:03:02.581060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.011 [2024-12-05 14:03:02.581316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.011 [2024-12-05 14:03:02.581329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.011 [2024-12-05 14:03:02.581339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.011 [2024-12-05 14:03:02.581354] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.368 [2024-12-05 14:03:02.593398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.368 [2024-12-05 14:03:02.593852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.368 [2024-12-05 14:03:02.593872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.368 [2024-12-05 14:03:02.593881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.368 [2024-12-05 14:03:02.594078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.368 [2024-12-05 14:03:02.594287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.368 [2024-12-05 14:03:02.594297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.368 [2024-12-05 14:03:02.594305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.368 [2024-12-05 14:03:02.594312] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.368 [2024-12-05 14:03:02.606439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.368 [2024-12-05 14:03:02.606811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.368 [2024-12-05 14:03:02.606830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.368 [2024-12-05 14:03:02.606837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.368 [2024-12-05 14:03:02.607012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.368 [2024-12-05 14:03:02.607185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.368 [2024-12-05 14:03:02.607196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.368 [2024-12-05 14:03:02.607203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.368 [2024-12-05 14:03:02.607210] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.368 [2024-12-05 14:03:02.619417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.368 [2024-12-05 14:03:02.619840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.368 [2024-12-05 14:03:02.619882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.368 [2024-12-05 14:03:02.619908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.368 [2024-12-05 14:03:02.620506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.368 [2024-12-05 14:03:02.620696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.368 [2024-12-05 14:03:02.620707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.368 [2024-12-05 14:03:02.620713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.368 [2024-12-05 14:03:02.620720] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.369 9544.33 IOPS, 37.28 MiB/s [2024-12-05T13:03:02.956Z] [2024-12-05 14:03:02.632421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.369 [2024-12-05 14:03:02.632771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.369 [2024-12-05 14:03:02.632788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.369 [2024-12-05 14:03:02.632795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.369 [2024-12-05 14:03:02.632955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.369 [2024-12-05 14:03:02.633116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.369 [2024-12-05 14:03:02.633125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.369 [2024-12-05 14:03:02.633131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.369 [2024-12-05 14:03:02.633138] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.369 [2024-12-05 14:03:02.645286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.369 [2024-12-05 14:03:02.645691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.369 [2024-12-05 14:03:02.645737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.369 [2024-12-05 14:03:02.645760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.369 [2024-12-05 14:03:02.646345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.369 [2024-12-05 14:03:02.646885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.369 [2024-12-05 14:03:02.646896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.369 [2024-12-05 14:03:02.646903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.369 [2024-12-05 14:03:02.646926] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.369 [2024-12-05 14:03:02.660362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.369 [2024-12-05 14:03:02.660894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.369 [2024-12-05 14:03:02.660939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.369 [2024-12-05 14:03:02.660962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.369 [2024-12-05 14:03:02.661562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.369 [2024-12-05 14:03:02.662012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.369 [2024-12-05 14:03:02.662025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.369 [2024-12-05 14:03:02.662035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.369 [2024-12-05 14:03:02.662045] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.369 [2024-12-05 14:03:02.673391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.369 [2024-12-05 14:03:02.673790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.369 [2024-12-05 14:03:02.673808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.369 [2024-12-05 14:03:02.673819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.369 [2024-12-05 14:03:02.673988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.369 [2024-12-05 14:03:02.674158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.369 [2024-12-05 14:03:02.674168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.369 [2024-12-05 14:03:02.674175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.369 [2024-12-05 14:03:02.674181] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.369 [2024-12-05 14:03:02.686246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.369 [2024-12-05 14:03:02.686632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.369 [2024-12-05 14:03:02.686668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.369 [2024-12-05 14:03:02.686694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.369 [2024-12-05 14:03:02.687279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.369 [2024-12-05 14:03:02.687706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.369 [2024-12-05 14:03:02.687726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.369 [2024-12-05 14:03:02.687740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.369 [2024-12-05 14:03:02.687755] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.369 [2024-12-05 14:03:02.701356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.369 [2024-12-05 14:03:02.701802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.369 [2024-12-05 14:03:02.701825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.369 [2024-12-05 14:03:02.701836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.369 [2024-12-05 14:03:02.702091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.369 [2024-12-05 14:03:02.702347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.369 [2024-12-05 14:03:02.702360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.369 [2024-12-05 14:03:02.702378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.369 [2024-12-05 14:03:02.702388] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.369 [2024-12-05 14:03:02.714396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.369 [2024-12-05 14:03:02.714801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.369 [2024-12-05 14:03:02.714818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.369 [2024-12-05 14:03:02.714825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.369 [2024-12-05 14:03:02.714994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.369 [2024-12-05 14:03:02.715173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.369 [2024-12-05 14:03:02.715185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.369 [2024-12-05 14:03:02.715191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.369 [2024-12-05 14:03:02.715198] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.370 [2024-12-05 14:03:02.727211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.370 [2024-12-05 14:03:02.727623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.370 [2024-12-05 14:03:02.727640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.370 [2024-12-05 14:03:02.727648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.370 [2024-12-05 14:03:02.727808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.370 [2024-12-05 14:03:02.727968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.370 [2024-12-05 14:03:02.727977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.370 [2024-12-05 14:03:02.727984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.370 [2024-12-05 14:03:02.727991] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.370 [2024-12-05 14:03:02.740070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.370 [2024-12-05 14:03:02.740396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.370 [2024-12-05 14:03:02.740414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.370 [2024-12-05 14:03:02.740422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.370 [2024-12-05 14:03:02.740582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.370 [2024-12-05 14:03:02.740742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.370 [2024-12-05 14:03:02.740751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.370 [2024-12-05 14:03:02.740757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.370 [2024-12-05 14:03:02.740763] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.370 [2024-12-05 14:03:02.752898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.370 [2024-12-05 14:03:02.753326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.370 [2024-12-05 14:03:02.753383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.370 [2024-12-05 14:03:02.753409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.370 [2024-12-05 14:03:02.753991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.370 [2024-12-05 14:03:02.754378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.370 [2024-12-05 14:03:02.754389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.370 [2024-12-05 14:03:02.754399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.370 [2024-12-05 14:03:02.754406] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.370 [2024-12-05 14:03:02.765664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.370 [2024-12-05 14:03:02.766089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.370 [2024-12-05 14:03:02.766135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.370 [2024-12-05 14:03:02.766159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.370 [2024-12-05 14:03:02.766759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.370 [2024-12-05 14:03:02.767299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.370 [2024-12-05 14:03:02.767308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.370 [2024-12-05 14:03:02.767315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.370 [2024-12-05 14:03:02.767322] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.370 [2024-12-05 14:03:02.778419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.370 [2024-12-05 14:03:02.778759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.370 [2024-12-05 14:03:02.778776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.370 [2024-12-05 14:03:02.778784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.370 [2024-12-05 14:03:02.778943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.370 [2024-12-05 14:03:02.779103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.370 [2024-12-05 14:03:02.779113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.370 [2024-12-05 14:03:02.779120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.370 [2024-12-05 14:03:02.779126] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.370 [2024-12-05 14:03:02.791229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.370 [2024-12-05 14:03:02.791580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.370 [2024-12-05 14:03:02.791597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.370 [2024-12-05 14:03:02.791605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.370 [2024-12-05 14:03:02.791764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.370 [2024-12-05 14:03:02.791924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.370 [2024-12-05 14:03:02.791934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.370 [2024-12-05 14:03:02.791940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.370 [2024-12-05 14:03:02.791947] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.370 [2024-12-05 14:03:02.804078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.370 [2024-12-05 14:03:02.804480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.370 [2024-12-05 14:03:02.804497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.370 [2024-12-05 14:03:02.804504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.370 [2024-12-05 14:03:02.804663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.370 [2024-12-05 14:03:02.804823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.370 [2024-12-05 14:03:02.804833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.370 [2024-12-05 14:03:02.804839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.370 [2024-12-05 14:03:02.804846] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.371 [2024-12-05 14:03:02.816827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.371 [2024-12-05 14:03:02.817260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.371 [2024-12-05 14:03:02.817307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.371 [2024-12-05 14:03:02.817331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.371 [2024-12-05 14:03:02.817928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.371 [2024-12-05 14:03:02.818426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.371 [2024-12-05 14:03:02.818436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.371 [2024-12-05 14:03:02.818442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.371 [2024-12-05 14:03:02.818449] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.371 [2024-12-05 14:03:02.829679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.371 [2024-12-05 14:03:02.830096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.371 [2024-12-05 14:03:02.830113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.371 [2024-12-05 14:03:02.830121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.371 [2024-12-05 14:03:02.830280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.371 [2024-12-05 14:03:02.830447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.371 [2024-12-05 14:03:02.830457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.371 [2024-12-05 14:03:02.830463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.371 [2024-12-05 14:03:02.830470] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.371 [2024-12-05 14:03:02.842464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.371 [2024-12-05 14:03:02.842881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.371 [2024-12-05 14:03:02.842926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.371 [2024-12-05 14:03:02.842957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.371 [2024-12-05 14:03:02.843363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.371 [2024-12-05 14:03:02.843533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.371 [2024-12-05 14:03:02.843542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.371 [2024-12-05 14:03:02.843548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.371 [2024-12-05 14:03:02.843554] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.371 [2024-12-05 14:03:02.855232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.371 [2024-12-05 14:03:02.855559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.371 [2024-12-05 14:03:02.855605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.371 [2024-12-05 14:03:02.855629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.371 [2024-12-05 14:03:02.856149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.371 [2024-12-05 14:03:02.856310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.371 [2024-12-05 14:03:02.856320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.371 [2024-12-05 14:03:02.856327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.371 [2024-12-05 14:03:02.856333] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.371 [2024-12-05 14:03:02.868238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.371 [2024-12-05 14:03:02.868676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.371 [2024-12-05 14:03:02.868694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.371 [2024-12-05 14:03:02.868702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.371 [2024-12-05 14:03:02.868876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.371 [2024-12-05 14:03:02.869050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.371 [2024-12-05 14:03:02.869060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.371 [2024-12-05 14:03:02.869067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.371 [2024-12-05 14:03:02.869073] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.371 [2024-12-05 14:03:02.881322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.371 [2024-12-05 14:03:02.881675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.371 [2024-12-05 14:03:02.881693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.371 [2024-12-05 14:03:02.881701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.371 [2024-12-05 14:03:02.881875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.371 [2024-12-05 14:03:02.882054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.371 [2024-12-05 14:03:02.882064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.371 [2024-12-05 14:03:02.882071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.371 [2024-12-05 14:03:02.882078] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.371 [2024-12-05 14:03:02.894361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.371 [2024-12-05 14:03:02.894786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.371 [2024-12-05 14:03:02.894804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.371 [2024-12-05 14:03:02.894812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.371 [2024-12-05 14:03:02.894980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.371 [2024-12-05 14:03:02.895149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.371 [2024-12-05 14:03:02.895159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.371 [2024-12-05 14:03:02.895165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.372 [2024-12-05 14:03:02.895172] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.372 [2024-12-05 14:03:02.907152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.372 [2024-12-05 14:03:02.907503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.372 [2024-12-05 14:03:02.907521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.372 [2024-12-05 14:03:02.907529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.372 [2024-12-05 14:03:02.907699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.372 [2024-12-05 14:03:02.907867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.372 [2024-12-05 14:03:02.907877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.372 [2024-12-05 14:03:02.907885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.372 [2024-12-05 14:03:02.907891] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.372 [2024-12-05 14:03:02.920235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.372 [2024-12-05 14:03:02.920570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.372 [2024-12-05 14:03:02.920589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.372 [2024-12-05 14:03:02.920597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.372 [2024-12-05 14:03:02.920770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.372 [2024-12-05 14:03:02.920945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.372 [2024-12-05 14:03:02.920955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.372 [2024-12-05 14:03:02.920966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.372 [2024-12-05 14:03:02.920974] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.372 [2024-12-05 14:03:02.933227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.372 [2024-12-05 14:03:02.933640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.372 [2024-12-05 14:03:02.933659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.372 [2024-12-05 14:03:02.933667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.372 [2024-12-05 14:03:02.933841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.372 [2024-12-05 14:03:02.934015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.372 [2024-12-05 14:03:02.934025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.372 [2024-12-05 14:03:02.934032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.372 [2024-12-05 14:03:02.934039] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.647 [2024-12-05 14:03:02.946246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.647 [2024-12-05 14:03:02.946660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.647 [2024-12-05 14:03:02.946678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.647 [2024-12-05 14:03:02.946686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.647 [2024-12-05 14:03:02.946859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.647 [2024-12-05 14:03:02.947034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.647 [2024-12-05 14:03:02.947044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.647 [2024-12-05 14:03:02.947051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.647 [2024-12-05 14:03:02.947058] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.647 [2024-12-05 14:03:02.959305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.647 [2024-12-05 14:03:02.959744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.647 [2024-12-05 14:03:02.959762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.647 [2024-12-05 14:03:02.959769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.647 [2024-12-05 14:03:02.959944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.647 [2024-12-05 14:03:02.960117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.647 [2024-12-05 14:03:02.960127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.647 [2024-12-05 14:03:02.960134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.647 [2024-12-05 14:03:02.960141] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.647 [2024-12-05 14:03:02.972420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.647 [2024-12-05 14:03:02.972831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.647 [2024-12-05 14:03:02.972849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.647 [2024-12-05 14:03:02.972857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.647 [2024-12-05 14:03:02.973031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.647 [2024-12-05 14:03:02.973206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.647 [2024-12-05 14:03:02.973216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.647 [2024-12-05 14:03:02.973223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.647 [2024-12-05 14:03:02.973230] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.647 [2024-12-05 14:03:02.985485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.647 [2024-12-05 14:03:02.985893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.647 [2024-12-05 14:03:02.985911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.647 [2024-12-05 14:03:02.985919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.647 [2024-12-05 14:03:02.986088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.647 [2024-12-05 14:03:02.986258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.647 [2024-12-05 14:03:02.986267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.647 [2024-12-05 14:03:02.986274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.647 [2024-12-05 14:03:02.986281] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.647 [2024-12-05 14:03:02.998312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.647 [2024-12-05 14:03:02.998647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.647 [2024-12-05 14:03:02.998664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.647 [2024-12-05 14:03:02.998672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.647 [2024-12-05 14:03:02.998832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.647 [2024-12-05 14:03:02.998992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.647 [2024-12-05 14:03:02.999001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.648 [2024-12-05 14:03:02.999008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.648 [2024-12-05 14:03:02.999014] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.648 [2024-12-05 14:03:03.011170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.648 [2024-12-05 14:03:03.011519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.648 [2024-12-05 14:03:03.011538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.648 [2024-12-05 14:03:03.011548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.648 [2024-12-05 14:03:03.011709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.648 [2024-12-05 14:03:03.011870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.648 [2024-12-05 14:03:03.011879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.648 [2024-12-05 14:03:03.011886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.648 [2024-12-05 14:03:03.011892] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.648 [2024-12-05 14:03:03.023996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.648 [2024-12-05 14:03:03.024422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.648 [2024-12-05 14:03:03.024468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.648 [2024-12-05 14:03:03.024491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.648 [2024-12-05 14:03:03.024899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.648 [2024-12-05 14:03:03.025061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.648 [2024-12-05 14:03:03.025070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.648 [2024-12-05 14:03:03.025076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.648 [2024-12-05 14:03:03.025082] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.648 [2024-12-05 14:03:03.036864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.648 [2024-12-05 14:03:03.037278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.648 [2024-12-05 14:03:03.037295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.648 [2024-12-05 14:03:03.037303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.648 [2024-12-05 14:03:03.037490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.648 [2024-12-05 14:03:03.037660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.648 [2024-12-05 14:03:03.037669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.648 [2024-12-05 14:03:03.037676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.648 [2024-12-05 14:03:03.037682] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.648 [2024-12-05 14:03:03.049736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.648 [2024-12-05 14:03:03.050149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.648 [2024-12-05 14:03:03.050166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.648 [2024-12-05 14:03:03.050173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.648 [2024-12-05 14:03:03.050333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.648 [2024-12-05 14:03:03.050504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.648 [2024-12-05 14:03:03.050514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.648 [2024-12-05 14:03:03.050520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.648 [2024-12-05 14:03:03.050527] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.648 [2024-12-05 14:03:03.062466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.648 [2024-12-05 14:03:03.062805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.648 [2024-12-05 14:03:03.062822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.648 [2024-12-05 14:03:03.062829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.648 [2024-12-05 14:03:03.062988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.648 [2024-12-05 14:03:03.063148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.648 [2024-12-05 14:03:03.063157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.648 [2024-12-05 14:03:03.063163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.648 [2024-12-05 14:03:03.063170] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.648 [2024-12-05 14:03:03.075321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.648 [2024-12-05 14:03:03.075594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.648 [2024-12-05 14:03:03.075612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.648 [2024-12-05 14:03:03.075619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.648 [2024-12-05 14:03:03.075778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.648 [2024-12-05 14:03:03.075938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.648 [2024-12-05 14:03:03.075948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.648 [2024-12-05 14:03:03.075955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.648 [2024-12-05 14:03:03.075961] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.648 [2024-12-05 14:03:03.088160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.648 [2024-12-05 14:03:03.088598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.648 [2024-12-05 14:03:03.088617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.648 [2024-12-05 14:03:03.088625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.648 [2024-12-05 14:03:03.088800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.648 [2024-12-05 14:03:03.088974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.648 [2024-12-05 14:03:03.088983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.648 [2024-12-05 14:03:03.088994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.648 [2024-12-05 14:03:03.089002] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.648 [2024-12-05 14:03:03.101020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.648 [2024-12-05 14:03:03.101443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.648 [2024-12-05 14:03:03.101489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.648 [2024-12-05 14:03:03.101513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.648 [2024-12-05 14:03:03.101752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.648 [2024-12-05 14:03:03.101913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.648 [2024-12-05 14:03:03.101923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.648 [2024-12-05 14:03:03.101929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.648 [2024-12-05 14:03:03.101935] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.648 [2024-12-05 14:03:03.113790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.648 [2024-12-05 14:03:03.114208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.648 [2024-12-05 14:03:03.114225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.648 [2024-12-05 14:03:03.114233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.648 [2024-12-05 14:03:03.114401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.648 [2024-12-05 14:03:03.114561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.648 [2024-12-05 14:03:03.114570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.648 [2024-12-05 14:03:03.114577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.648 [2024-12-05 14:03:03.114584] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.648 [2024-12-05 14:03:03.126528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.648 [2024-12-05 14:03:03.126863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.648 [2024-12-05 14:03:03.126880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.649 [2024-12-05 14:03:03.126888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.649 [2024-12-05 14:03:03.127049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.649 [2024-12-05 14:03:03.127209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.649 [2024-12-05 14:03:03.127218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.649 [2024-12-05 14:03:03.127225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.649 [2024-12-05 14:03:03.127232] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.649 [2024-12-05 14:03:03.139323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.649 [2024-12-05 14:03:03.139738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.649 [2024-12-05 14:03:03.139755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.649 [2024-12-05 14:03:03.139762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.649 [2024-12-05 14:03:03.139921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.649 [2024-12-05 14:03:03.140081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.649 [2024-12-05 14:03:03.140090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.649 [2024-12-05 14:03:03.140097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.649 [2024-12-05 14:03:03.140103] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.649 [2024-12-05 14:03:03.152106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.649 [2024-12-05 14:03:03.152441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.649 [2024-12-05 14:03:03.152457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.649 [2024-12-05 14:03:03.152464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.649 [2024-12-05 14:03:03.152625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.649 [2024-12-05 14:03:03.152785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.649 [2024-12-05 14:03:03.152793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.649 [2024-12-05 14:03:03.152800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.649 [2024-12-05 14:03:03.152806] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.649 [2024-12-05 14:03:03.164939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.649 [2024-12-05 14:03:03.165351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.649 [2024-12-05 14:03:03.165401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.649 [2024-12-05 14:03:03.165428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.649 [2024-12-05 14:03:03.165975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.649 [2024-12-05 14:03:03.166138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.649 [2024-12-05 14:03:03.166148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.649 [2024-12-05 14:03:03.166154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.649 [2024-12-05 14:03:03.166160] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.649 [2024-12-05 14:03:03.178063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.649 [2024-12-05 14:03:03.178474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.649 [2024-12-05 14:03:03.178493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.649 [2024-12-05 14:03:03.178504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.649 [2024-12-05 14:03:03.178678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.649 [2024-12-05 14:03:03.178852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.649 [2024-12-05 14:03:03.178862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.649 [2024-12-05 14:03:03.178868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.649 [2024-12-05 14:03:03.178875] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.649 [2024-12-05 14:03:03.191110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.649 [2024-12-05 14:03:03.191404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.649 [2024-12-05 14:03:03.191423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.649 [2024-12-05 14:03:03.191431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.649 [2024-12-05 14:03:03.191606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.649 [2024-12-05 14:03:03.191780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.649 [2024-12-05 14:03:03.191790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.649 [2024-12-05 14:03:03.191797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.649 [2024-12-05 14:03:03.191804] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.649 [2024-12-05 14:03:03.204076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.649 [2024-12-05 14:03:03.204401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.649 [2024-12-05 14:03:03.204420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.649 [2024-12-05 14:03:03.204428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.649 [2024-12-05 14:03:03.204597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.649 [2024-12-05 14:03:03.204766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.649 [2024-12-05 14:03:03.204777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.649 [2024-12-05 14:03:03.204783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.649 [2024-12-05 14:03:03.204790] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.649 [2024-12-05 14:03:03.216861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.649 [2024-12-05 14:03:03.217201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.649 [2024-12-05 14:03:03.217220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.649 [2024-12-05 14:03:03.217227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.649 [2024-12-05 14:03:03.217401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.649 [2024-12-05 14:03:03.217575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.649 [2024-12-05 14:03:03.217585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.649 [2024-12-05 14:03:03.217591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.649 [2024-12-05 14:03:03.217598] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.649 [2024-12-05 14:03:03.230075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.649 [2024-12-05 14:03:03.230376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.649 [2024-12-05 14:03:03.230396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.649 [2024-12-05 14:03:03.230405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.650 [2024-12-05 14:03:03.230598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.650 [2024-12-05 14:03:03.230783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.650 [2024-12-05 14:03:03.230793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.650 [2024-12-05 14:03:03.230800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.650 [2024-12-05 14:03:03.230808] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.909 [2024-12-05 14:03:03.243105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.909 [2024-12-05 14:03:03.243436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.909 [2024-12-05 14:03:03.243454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.909 [2024-12-05 14:03:03.243462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.909 [2024-12-05 14:03:03.243630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.909 [2024-12-05 14:03:03.243799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.909 [2024-12-05 14:03:03.243808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.909 [2024-12-05 14:03:03.243814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.909 [2024-12-05 14:03:03.243821] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.909 [2024-12-05 14:03:03.255846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.909 [2024-12-05 14:03:03.256169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.909 [2024-12-05 14:03:03.256186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.909 [2024-12-05 14:03:03.256193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.909 [2024-12-05 14:03:03.256353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.909 [2024-12-05 14:03:03.256518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.909 [2024-12-05 14:03:03.256528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.909 [2024-12-05 14:03:03.256535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.909 [2024-12-05 14:03:03.256547] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.909 [2024-12-05 14:03:03.268692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.909 [2024-12-05 14:03:03.269086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.909 [2024-12-05 14:03:03.269104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.909 [2024-12-05 14:03:03.269111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.909 [2024-12-05 14:03:03.269271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.909 [2024-12-05 14:03:03.269436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.909 [2024-12-05 14:03:03.269447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.909 [2024-12-05 14:03:03.269453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.909 [2024-12-05 14:03:03.269460] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.909 [2024-12-05 14:03:03.281457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.909 [2024-12-05 14:03:03.281792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.909 [2024-12-05 14:03:03.281809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.909 [2024-12-05 14:03:03.281816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.909 [2024-12-05 14:03:03.281977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.909 [2024-12-05 14:03:03.282137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.909 [2024-12-05 14:03:03.282146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.909 [2024-12-05 14:03:03.282152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.909 [2024-12-05 14:03:03.282158] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.909 [2024-12-05 14:03:03.294250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.909 [2024-12-05 14:03:03.294623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.909 [2024-12-05 14:03:03.294640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.909 [2024-12-05 14:03:03.294647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.909 [2024-12-05 14:03:03.294807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.909 [2024-12-05 14:03:03.294967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.909 [2024-12-05 14:03:03.294976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.910 [2024-12-05 14:03:03.294982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.910 [2024-12-05 14:03:03.294989] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.910 [2024-12-05 14:03:03.307065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.910 [2024-12-05 14:03:03.307443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.910 [2024-12-05 14:03:03.307460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.910 [2024-12-05 14:03:03.307468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.910 [2024-12-05 14:03:03.307628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.910 [2024-12-05 14:03:03.307788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.910 [2024-12-05 14:03:03.307797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.910 [2024-12-05 14:03:03.307804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.910 [2024-12-05 14:03:03.307811] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.910 [2024-12-05 14:03:03.319809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.910 [2024-12-05 14:03:03.320144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.910 [2024-12-05 14:03:03.320190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.910 [2024-12-05 14:03:03.320215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.910 [2024-12-05 14:03:03.320664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.910 [2024-12-05 14:03:03.320826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.910 [2024-12-05 14:03:03.320836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.910 [2024-12-05 14:03:03.320842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.910 [2024-12-05 14:03:03.320849] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.910 [2024-12-05 14:03:03.332575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.910 [2024-12-05 14:03:03.332831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.910 [2024-12-05 14:03:03.332864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.910 [2024-12-05 14:03:03.332871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.910 [2024-12-05 14:03:03.333040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.910 [2024-12-05 14:03:03.333210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.910 [2024-12-05 14:03:03.333220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.910 [2024-12-05 14:03:03.333227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.910 [2024-12-05 14:03:03.333233] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.910 [2024-12-05 14:03:03.345426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.910 [2024-12-05 14:03:03.345700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.910 [2024-12-05 14:03:03.345715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.910 [2024-12-05 14:03:03.345725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.910 [2024-12-05 14:03:03.345885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.910 [2024-12-05 14:03:03.346045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.910 [2024-12-05 14:03:03.346054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.910 [2024-12-05 14:03:03.346060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.910 [2024-12-05 14:03:03.346066] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.910 [2024-12-05 14:03:03.358199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.910 [2024-12-05 14:03:03.358549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.910 [2024-12-05 14:03:03.358566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.910 [2024-12-05 14:03:03.358573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.910 [2024-12-05 14:03:03.358732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.910 [2024-12-05 14:03:03.358893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.910 [2024-12-05 14:03:03.358903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.910 [2024-12-05 14:03:03.358909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.910 [2024-12-05 14:03:03.358915] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.910 [2024-12-05 14:03:03.371060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.910 [2024-12-05 14:03:03.371377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.910 [2024-12-05 14:03:03.371394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.910 [2024-12-05 14:03:03.371402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.910 [2024-12-05 14:03:03.371561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.910 [2024-12-05 14:03:03.371722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.910 [2024-12-05 14:03:03.371732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.910 [2024-12-05 14:03:03.371738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.910 [2024-12-05 14:03:03.371744] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.910 [2024-12-05 14:03:03.383962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.910 [2024-12-05 14:03:03.384249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.910 [2024-12-05 14:03:03.384265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.911 [2024-12-05 14:03:03.384273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.911 [2024-12-05 14:03:03.384438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.911 [2024-12-05 14:03:03.384602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.911 [2024-12-05 14:03:03.384611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.911 [2024-12-05 14:03:03.384618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.911 [2024-12-05 14:03:03.384624] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.911 [2024-12-05 14:03:03.396802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.911 [2024-12-05 14:03:03.397157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.911 [2024-12-05 14:03:03.397175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.911 [2024-12-05 14:03:03.397182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.911 [2024-12-05 14:03:03.397342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.911 [2024-12-05 14:03:03.397509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.911 [2024-12-05 14:03:03.397519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.911 [2024-12-05 14:03:03.397525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.911 [2024-12-05 14:03:03.397532] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.911 [2024-12-05 14:03:03.409675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.911 [2024-12-05 14:03:03.410086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.911 [2024-12-05 14:03:03.410123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.911 [2024-12-05 14:03:03.410150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.911 [2024-12-05 14:03:03.410723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.911 [2024-12-05 14:03:03.410885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.911 [2024-12-05 14:03:03.410894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.911 [2024-12-05 14:03:03.410901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.911 [2024-12-05 14:03:03.410909] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.911 [2024-12-05 14:03:03.422460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.911 [2024-12-05 14:03:03.422788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.911 [2024-12-05 14:03:03.422834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.911 [2024-12-05 14:03:03.422859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.911 [2024-12-05 14:03:03.423460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.911 [2024-12-05 14:03:03.423650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.911 [2024-12-05 14:03:03.423661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.911 [2024-12-05 14:03:03.423670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.911 [2024-12-05 14:03:03.423682] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.911 [2024-12-05 14:03:03.435598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.911 [2024-12-05 14:03:03.435890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.911 [2024-12-05 14:03:03.435909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.911 [2024-12-05 14:03:03.435917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.911 [2024-12-05 14:03:03.436102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.911 [2024-12-05 14:03:03.436288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.911 [2024-12-05 14:03:03.436298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.911 [2024-12-05 14:03:03.436305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.911 [2024-12-05 14:03:03.436313] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.911 [2024-12-05 14:03:03.448601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:20.911 [2024-12-05 14:03:03.448893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.911 [2024-12-05 14:03:03.448912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:20.911 [2024-12-05 14:03:03.448921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:20.911 [2024-12-05 14:03:03.449095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:20.911 [2024-12-05 14:03:03.449270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:20.911 [2024-12-05 14:03:03.449279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:20.911 [2024-12-05 14:03:03.449286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:20.911 [2024-12-05 14:03:03.449294] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:20.911 [2024-12-05 14:03:03.461500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.911 [2024-12-05 14:03:03.461783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.911 [2024-12-05 14:03:03.461801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.911 [2024-12-05 14:03:03.461808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.911 [2024-12-05 14:03:03.461977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.911 [2024-12-05 14:03:03.462146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.911 [2024-12-05 14:03:03.462155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.911 [2024-12-05 14:03:03.462162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.911 [2024-12-05 14:03:03.462168] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.911 [2024-12-05 14:03:03.474431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.912 [2024-12-05 14:03:03.474753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.912 [2024-12-05 14:03:03.474770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.912 [2024-12-05 14:03:03.474777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.912 [2024-12-05 14:03:03.474937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.912 [2024-12-05 14:03:03.475098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.912 [2024-12-05 14:03:03.475107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.912 [2024-12-05 14:03:03.475114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.912 [2024-12-05 14:03:03.475120] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:20.912 [2024-12-05 14:03:03.487206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:20.912 [2024-12-05 14:03:03.487540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.912 [2024-12-05 14:03:03.487557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:20.912 [2024-12-05 14:03:03.487564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:20.912 [2024-12-05 14:03:03.487725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:20.912 [2024-12-05 14:03:03.487885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:20.912 [2024-12-05 14:03:03.487894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:20.912 [2024-12-05 14:03:03.487900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:20.912 [2024-12-05 14:03:03.487906] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.171 [2024-12-05 14:03:03.500202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.171 [2024-12-05 14:03:03.500549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.171 [2024-12-05 14:03:03.500567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.171 [2024-12-05 14:03:03.500575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.171 [2024-12-05 14:03:03.500748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.171 [2024-12-05 14:03:03.500923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.171 [2024-12-05 14:03:03.500932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.171 [2024-12-05 14:03:03.500939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.171 [2024-12-05 14:03:03.500946] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.171 [2024-12-05 14:03:03.512995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.171 [2024-12-05 14:03:03.513317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.171 [2024-12-05 14:03:03.513333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.171 [2024-12-05 14:03:03.513343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.171 [2024-12-05 14:03:03.513510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.172 [2024-12-05 14:03:03.513669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.172 [2024-12-05 14:03:03.513678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.172 [2024-12-05 14:03:03.513684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.172 [2024-12-05 14:03:03.513691] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.172 [2024-12-05 14:03:03.525832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.172 [2024-12-05 14:03:03.526092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.172 [2024-12-05 14:03:03.526109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.172 [2024-12-05 14:03:03.526116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.172 [2024-12-05 14:03:03.526275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.172 [2024-12-05 14:03:03.526440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.172 [2024-12-05 14:03:03.526450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.172 [2024-12-05 14:03:03.526457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.172 [2024-12-05 14:03:03.526464] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.172 [2024-12-05 14:03:03.538776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.172 [2024-12-05 14:03:03.539052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.172 [2024-12-05 14:03:03.539068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.172 [2024-12-05 14:03:03.539075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.172 [2024-12-05 14:03:03.539235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.172 [2024-12-05 14:03:03.539399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.172 [2024-12-05 14:03:03.539409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.172 [2024-12-05 14:03:03.539415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.172 [2024-12-05 14:03:03.539423] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.172 [2024-12-05 14:03:03.551572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.172 [2024-12-05 14:03:03.551890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.172 [2024-12-05 14:03:03.551906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.172 [2024-12-05 14:03:03.551913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.172 [2024-12-05 14:03:03.552073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.172 [2024-12-05 14:03:03.552237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.172 [2024-12-05 14:03:03.552246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.172 [2024-12-05 14:03:03.552253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.172 [2024-12-05 14:03:03.552259] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.172 [2024-12-05 14:03:03.564607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.172 [2024-12-05 14:03:03.564942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.172 [2024-12-05 14:03:03.564960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.172 [2024-12-05 14:03:03.564969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.172 [2024-12-05 14:03:03.565142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.172 [2024-12-05 14:03:03.565318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.172 [2024-12-05 14:03:03.565329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.172 [2024-12-05 14:03:03.565336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.172 [2024-12-05 14:03:03.565343] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.172 [2024-12-05 14:03:03.577427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.172 [2024-12-05 14:03:03.577689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.172 [2024-12-05 14:03:03.577705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.172 [2024-12-05 14:03:03.577713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.172 [2024-12-05 14:03:03.577873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.172 [2024-12-05 14:03:03.578033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.172 [2024-12-05 14:03:03.578042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.172 [2024-12-05 14:03:03.578048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.172 [2024-12-05 14:03:03.578054] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.172 [2024-12-05 14:03:03.590298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.172 [2024-12-05 14:03:03.590618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.172 [2024-12-05 14:03:03.590635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.172 [2024-12-05 14:03:03.590642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.172 [2024-12-05 14:03:03.590802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.172 [2024-12-05 14:03:03.590962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.172 [2024-12-05 14:03:03.590971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.172 [2024-12-05 14:03:03.590977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.172 [2024-12-05 14:03:03.590988] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.172 [2024-12-05 14:03:03.603094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.172 [2024-12-05 14:03:03.603419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.172 [2024-12-05 14:03:03.603437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.172 [2024-12-05 14:03:03.603444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.172 [2024-12-05 14:03:03.603604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.172 [2024-12-05 14:03:03.603764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.172 [2024-12-05 14:03:03.603774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.172 [2024-12-05 14:03:03.603780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.172 [2024-12-05 14:03:03.603787] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.172 [2024-12-05 14:03:03.615954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.172 [2024-12-05 14:03:03.616218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.172 [2024-12-05 14:03:03.616235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.172 [2024-12-05 14:03:03.616243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.172 [2024-12-05 14:03:03.616408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.172 [2024-12-05 14:03:03.616568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.172 [2024-12-05 14:03:03.616577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.172 [2024-12-05 14:03:03.616584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.172 [2024-12-05 14:03:03.616590] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.172 7158.25 IOPS, 27.96 MiB/s [2024-12-05T13:03:03.759Z] [2024-12-05 14:03:03.629922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.172 [2024-12-05 14:03:03.630205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.172 [2024-12-05 14:03:03.630251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.172 [2024-12-05 14:03:03.630275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.172 [2024-12-05 14:03:03.630874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.172 [2024-12-05 14:03:03.631324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.173 [2024-12-05 14:03:03.631333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.173 [2024-12-05 14:03:03.631340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.173 [2024-12-05 14:03:03.631346] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.173 [2024-12-05 14:03:03.642757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.173 [2024-12-05 14:03:03.643112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.173 [2024-12-05 14:03:03.643156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.173 [2024-12-05 14:03:03.643180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.173 [2024-12-05 14:03:03.643778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.173 [2024-12-05 14:03:03.644250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.173 [2024-12-05 14:03:03.644259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.173 [2024-12-05 14:03:03.644266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.173 [2024-12-05 14:03:03.644272] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.173 [2024-12-05 14:03:03.655513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.173 [2024-12-05 14:03:03.655942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.173 [2024-12-05 14:03:03.655985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.173 [2024-12-05 14:03:03.656010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.173 [2024-12-05 14:03:03.656376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.173 [2024-12-05 14:03:03.656539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.173 [2024-12-05 14:03:03.656548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.173 [2024-12-05 14:03:03.656554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.173 [2024-12-05 14:03:03.656560] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.173 [2024-12-05 14:03:03.668256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.173 [2024-12-05 14:03:03.668511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.173 [2024-12-05 14:03:03.668528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.173 [2024-12-05 14:03:03.668536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.173 [2024-12-05 14:03:03.668695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.173 [2024-12-05 14:03:03.668855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.173 [2024-12-05 14:03:03.668864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.173 [2024-12-05 14:03:03.668870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.173 [2024-12-05 14:03:03.668877] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.173 [2024-12-05 14:03:03.681011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.173 [2024-12-05 14:03:03.681433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.173 [2024-12-05 14:03:03.681478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.173 [2024-12-05 14:03:03.681509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.173 [2024-12-05 14:03:03.682093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.173 [2024-12-05 14:03:03.682592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.173 [2024-12-05 14:03:03.682602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.173 [2024-12-05 14:03:03.682608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.173 [2024-12-05 14:03:03.682615] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.173 [2024-12-05 14:03:03.694198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.173 [2024-12-05 14:03:03.694616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.173 [2024-12-05 14:03:03.694634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.173 [2024-12-05 14:03:03.694642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.173 [2024-12-05 14:03:03.694826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.173 [2024-12-05 14:03:03.694995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.173 [2024-12-05 14:03:03.695005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.173 [2024-12-05 14:03:03.695012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.173 [2024-12-05 14:03:03.695018] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.173 [2024-12-05 14:03:03.707016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.173 [2024-12-05 14:03:03.707425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.173 [2024-12-05 14:03:03.707458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.173 [2024-12-05 14:03:03.707482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.173 [2024-12-05 14:03:03.708065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.173 [2024-12-05 14:03:03.708248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.173 [2024-12-05 14:03:03.708257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.173 [2024-12-05 14:03:03.708263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.173 [2024-12-05 14:03:03.708270] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.173 [2024-12-05 14:03:03.722131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.173 [2024-12-05 14:03:03.722552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.173 [2024-12-05 14:03:03.722576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.173 [2024-12-05 14:03:03.722587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.173 [2024-12-05 14:03:03.722842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.173 [2024-12-05 14:03:03.723102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.173 [2024-12-05 14:03:03.723116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.173 [2024-12-05 14:03:03.723125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.173 [2024-12-05 14:03:03.723135] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
[... the same nine-record reconnect cycle (resetting controller -> connect() failed, errno = 111 -> Resetting controller failed.) repeats 27 more times for tqpair=0x15a5510 (addr=10.0.0.2, port=4420), at roughly 12.5 ms intervals, from 14:03:03.735126 through 14:03:04.069689 ...]
00:31:21.695 [2024-12-05 14:03:04.081832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.695 [2024-12-05 14:03:04.082168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.695 [2024-12-05 14:03:04.082185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.695 [2024-12-05 14:03:04.082192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.695 [2024-12-05 14:03:04.082351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.695 [2024-12-05 14:03:04.082518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.695 [2024-12-05 14:03:04.082528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.695 [2024-12-05 14:03:04.082534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.695 [2024-12-05 14:03:04.082541] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.695 [2024-12-05 14:03:04.094613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.695 [2024-12-05 14:03:04.095000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.695 [2024-12-05 14:03:04.095018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.695 [2024-12-05 14:03:04.095026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.695 [2024-12-05 14:03:04.095184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.695 [2024-12-05 14:03:04.095345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.696 [2024-12-05 14:03:04.095354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.696 [2024-12-05 14:03:04.095364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.696 [2024-12-05 14:03:04.095379] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.696 [2024-12-05 14:03:04.107376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.696 [2024-12-05 14:03:04.107785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.696 [2024-12-05 14:03:04.107801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.696 [2024-12-05 14:03:04.107809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.696 [2024-12-05 14:03:04.107968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.696 [2024-12-05 14:03:04.108128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.696 [2024-12-05 14:03:04.108137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.696 [2024-12-05 14:03:04.108143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.696 [2024-12-05 14:03:04.108150] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.696 [2024-12-05 14:03:04.120166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.696 [2024-12-05 14:03:04.120594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.696 [2024-12-05 14:03:04.120640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.696 [2024-12-05 14:03:04.120665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.696 [2024-12-05 14:03:04.121250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.696 [2024-12-05 14:03:04.121725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.696 [2024-12-05 14:03:04.121735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.696 [2024-12-05 14:03:04.121741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.696 [2024-12-05 14:03:04.121747] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.696 [2024-12-05 14:03:04.135473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.696 [2024-12-05 14:03:04.135964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.696 [2024-12-05 14:03:04.135987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.696 [2024-12-05 14:03:04.135997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.696 [2024-12-05 14:03:04.136253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.696 [2024-12-05 14:03:04.136519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.696 [2024-12-05 14:03:04.136533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.696 [2024-12-05 14:03:04.136544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.696 [2024-12-05 14:03:04.136554] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.696 [2024-12-05 14:03:04.148471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.696 [2024-12-05 14:03:04.148898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.696 [2024-12-05 14:03:04.148943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.696 [2024-12-05 14:03:04.148967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.696 [2024-12-05 14:03:04.149524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.696 [2024-12-05 14:03:04.149697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.696 [2024-12-05 14:03:04.149707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.696 [2024-12-05 14:03:04.149713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.696 [2024-12-05 14:03:04.149719] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.696 [2024-12-05 14:03:04.161362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.696 [2024-12-05 14:03:04.161785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.696 [2024-12-05 14:03:04.161801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.696 [2024-12-05 14:03:04.161809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.696 [2024-12-05 14:03:04.161969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.696 [2024-12-05 14:03:04.162130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.696 [2024-12-05 14:03:04.162139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.696 [2024-12-05 14:03:04.162145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.696 [2024-12-05 14:03:04.162151] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.696 [2024-12-05 14:03:04.174150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.696 [2024-12-05 14:03:04.174560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.696 [2024-12-05 14:03:04.174599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.696 [2024-12-05 14:03:04.174625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.696 [2024-12-05 14:03:04.175212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.696 [2024-12-05 14:03:04.175380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.696 [2024-12-05 14:03:04.175390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.696 [2024-12-05 14:03:04.175396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.696 [2024-12-05 14:03:04.175404] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.696 [2024-12-05 14:03:04.187150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.696 [2024-12-05 14:03:04.187605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.696 [2024-12-05 14:03:04.187651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.696 [2024-12-05 14:03:04.187683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.696 [2024-12-05 14:03:04.188269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.696 [2024-12-05 14:03:04.188772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.696 [2024-12-05 14:03:04.188782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.696 [2024-12-05 14:03:04.188789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.696 [2024-12-05 14:03:04.188795] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.696 [2024-12-05 14:03:04.200020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.696 [2024-12-05 14:03:04.200489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.696 [2024-12-05 14:03:04.200535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.696 [2024-12-05 14:03:04.200560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.696 [2024-12-05 14:03:04.201145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.696 [2024-12-05 14:03:04.201744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.696 [2024-12-05 14:03:04.201771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.696 [2024-12-05 14:03:04.201778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.696 [2024-12-05 14:03:04.201785] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.696 [2024-12-05 14:03:04.213081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.696 [2024-12-05 14:03:04.213404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.696 [2024-12-05 14:03:04.213423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.696 [2024-12-05 14:03:04.213431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.696 [2024-12-05 14:03:04.213613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.696 [2024-12-05 14:03:04.213783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.696 [2024-12-05 14:03:04.213793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.696 [2024-12-05 14:03:04.213799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.696 [2024-12-05 14:03:04.213806] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.696 [2024-12-05 14:03:04.225868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.696 [2024-12-05 14:03:04.226283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.696 [2024-12-05 14:03:04.226301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.697 [2024-12-05 14:03:04.226308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.697 [2024-12-05 14:03:04.226491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.697 [2024-12-05 14:03:04.226664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.697 [2024-12-05 14:03:04.226674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.697 [2024-12-05 14:03:04.226681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.697 [2024-12-05 14:03:04.226688] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.697 [2024-12-05 14:03:04.238775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.697 [2024-12-05 14:03:04.239187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.697 [2024-12-05 14:03:04.239228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.697 [2024-12-05 14:03:04.239254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.697 [2024-12-05 14:03:04.239773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.697 [2024-12-05 14:03:04.239935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.697 [2024-12-05 14:03:04.239945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.697 [2024-12-05 14:03:04.239951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.697 [2024-12-05 14:03:04.239957] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.697 [2024-12-05 14:03:04.251632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.697 [2024-12-05 14:03:04.252042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.697 [2024-12-05 14:03:04.252081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.697 [2024-12-05 14:03:04.252106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.697 [2024-12-05 14:03:04.252676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.697 [2024-12-05 14:03:04.252985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.697 [2024-12-05 14:03:04.253004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.697 [2024-12-05 14:03:04.253019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.697 [2024-12-05 14:03:04.253033] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.697 [2024-12-05 14:03:04.266604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.697 [2024-12-05 14:03:04.267119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.697 [2024-12-05 14:03:04.267141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.697 [2024-12-05 14:03:04.267152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.697 [2024-12-05 14:03:04.267417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.697 [2024-12-05 14:03:04.267674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.697 [2024-12-05 14:03:04.267687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.697 [2024-12-05 14:03:04.267701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.697 [2024-12-05 14:03:04.267711] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.697 [2024-12-05 14:03:04.279651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.957 [2024-12-05 14:03:04.280076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-12-05 14:03:04.280093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.957 [2024-12-05 14:03:04.280101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.957 [2024-12-05 14:03:04.280275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.957 [2024-12-05 14:03:04.280456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.957 [2024-12-05 14:03:04.280467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.957 [2024-12-05 14:03:04.280474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.957 [2024-12-05 14:03:04.280481] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.957 [2024-12-05 14:03:04.292507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.957 [2024-12-05 14:03:04.292917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-12-05 14:03:04.292962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.957 [2024-12-05 14:03:04.292986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.957 [2024-12-05 14:03:04.293587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.957 [2024-12-05 14:03:04.293783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.957 [2024-12-05 14:03:04.293793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.957 [2024-12-05 14:03:04.293799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.957 [2024-12-05 14:03:04.293805] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.957 [2024-12-05 14:03:04.307469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.957 [2024-12-05 14:03:04.308004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-12-05 14:03:04.308048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.957 [2024-12-05 14:03:04.308073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.957 [2024-12-05 14:03:04.308662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.957 [2024-12-05 14:03:04.308920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.957 [2024-12-05 14:03:04.308933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.957 [2024-12-05 14:03:04.308943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.957 [2024-12-05 14:03:04.308953] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.957 [2024-12-05 14:03:04.320381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.957 [2024-12-05 14:03:04.320782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-12-05 14:03:04.320799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.957 [2024-12-05 14:03:04.320806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.957 [2024-12-05 14:03:04.320974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.957 [2024-12-05 14:03:04.321143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.957 [2024-12-05 14:03:04.321153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.957 [2024-12-05 14:03:04.321160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.957 [2024-12-05 14:03:04.321167] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.957 [2024-12-05 14:03:04.333161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.957 [2024-12-05 14:03:04.333569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-12-05 14:03:04.333587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.957 [2024-12-05 14:03:04.333594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.957 [2024-12-05 14:03:04.333754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.957 [2024-12-05 14:03:04.333913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.957 [2024-12-05 14:03:04.333923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.957 [2024-12-05 14:03:04.333929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.957 [2024-12-05 14:03:04.333935] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.957 [2024-12-05 14:03:04.345999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.957 [2024-12-05 14:03:04.346413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-12-05 14:03:04.346430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.957 [2024-12-05 14:03:04.346437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.957 [2024-12-05 14:03:04.346597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.957 [2024-12-05 14:03:04.346757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.957 [2024-12-05 14:03:04.346766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.957 [2024-12-05 14:03:04.346772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.957 [2024-12-05 14:03:04.346779] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.957 [2024-12-05 14:03:04.358761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:21.957 [2024-12-05 14:03:04.359105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-12-05 14:03:04.359122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:21.957 [2024-12-05 14:03:04.359132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:21.957 [2024-12-05 14:03:04.359291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:21.957 [2024-12-05 14:03:04.359459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:21.957 [2024-12-05 14:03:04.359470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:21.957 [2024-12-05 14:03:04.359477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:21.957 [2024-12-05 14:03:04.359483] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:21.957 [2024-12-05 14:03:04.371613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:21.957 [2024-12-05 14:03:04.372041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.957 [2024-12-05 14:03:04.372085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:21.957 [2024-12-05 14:03:04.372109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:21.957 [2024-12-05 14:03:04.372708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:21.957 [2024-12-05 14:03:04.373157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:21.957 [2024-12-05 14:03:04.373167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:21.957 [2024-12-05 14:03:04.373173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:21.957 [2024-12-05 14:03:04.373179] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:21.957 [2024-12-05 14:03:04.384530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:21.957 [2024-12-05 14:03:04.384872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.957 [2024-12-05 14:03:04.384888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:21.957 [2024-12-05 14:03:04.384896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:21.957 [2024-12-05 14:03:04.385056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:21.957 [2024-12-05 14:03:04.385217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:21.957 [2024-12-05 14:03:04.385226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:21.957 [2024-12-05 14:03:04.385233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:21.957 [2024-12-05 14:03:04.385239] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:21.957 [2024-12-05 14:03:04.397443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:21.957 [2024-12-05 14:03:04.397866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.958 [2024-12-05 14:03:04.397883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:21.958 [2024-12-05 14:03:04.397891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:21.958 [2024-12-05 14:03:04.398060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:21.958 [2024-12-05 14:03:04.398232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:21.958 [2024-12-05 14:03:04.398242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:21.958 [2024-12-05 14:03:04.398249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:21.958 [2024-12-05 14:03:04.398256] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:21.958 [2024-12-05 14:03:04.410224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:21.958 [2024-12-05 14:03:04.410647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.958 [2024-12-05 14:03:04.410693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:21.958 [2024-12-05 14:03:04.410716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:21.958 [2024-12-05 14:03:04.411185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:21.958 [2024-12-05 14:03:04.411346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:21.958 [2024-12-05 14:03:04.411356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:21.958 [2024-12-05 14:03:04.411362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:21.958 [2024-12-05 14:03:04.411376] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:21.958 [2024-12-05 14:03:04.423073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:21.958 [2024-12-05 14:03:04.423485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.958 [2024-12-05 14:03:04.423503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:21.958 [2024-12-05 14:03:04.423510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:21.958 [2024-12-05 14:03:04.423670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:21.958 [2024-12-05 14:03:04.423830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:21.958 [2024-12-05 14:03:04.423839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:21.958 [2024-12-05 14:03:04.423845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:21.958 [2024-12-05 14:03:04.423852] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:21.958 [2024-12-05 14:03:04.435915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:21.958 [2024-12-05 14:03:04.436357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.958 [2024-12-05 14:03:04.436416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:21.958 [2024-12-05 14:03:04.436440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:21.958 [2024-12-05 14:03:04.436940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:21.958 [2024-12-05 14:03:04.437110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:21.958 [2024-12-05 14:03:04.437120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:21.958 [2024-12-05 14:03:04.437131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:21.958 [2024-12-05 14:03:04.437138] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:21.958 [2024-12-05 14:03:04.448725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:21.958 [2024-12-05 14:03:04.449077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.958 [2024-12-05 14:03:04.449124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:21.958 [2024-12-05 14:03:04.449149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:21.958 [2024-12-05 14:03:04.449750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:21.958 [2024-12-05 14:03:04.450274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:21.958 [2024-12-05 14:03:04.450283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:21.958 [2024-12-05 14:03:04.450290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:21.958 [2024-12-05 14:03:04.450297] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:21.958 [2024-12-05 14:03:04.461752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:21.958 [2024-12-05 14:03:04.462123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.958 [2024-12-05 14:03:04.462140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:21.958 [2024-12-05 14:03:04.462149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:21.958 [2024-12-05 14:03:04.462323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:21.958 [2024-12-05 14:03:04.462505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:21.958 [2024-12-05 14:03:04.462516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:21.958 [2024-12-05 14:03:04.462523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:21.958 [2024-12-05 14:03:04.462530] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:21.958 [2024-12-05 14:03:04.474587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:21.958 [2024-12-05 14:03:04.475009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.958 [2024-12-05 14:03:04.475026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:21.958 [2024-12-05 14:03:04.475033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:21.958 [2024-12-05 14:03:04.475193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:21.958 [2024-12-05 14:03:04.475352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:21.958 [2024-12-05 14:03:04.475361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:21.958 [2024-12-05 14:03:04.475375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:21.958 [2024-12-05 14:03:04.475382] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:21.958 [2024-12-05 14:03:04.487471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:21.958 [2024-12-05 14:03:04.487851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.958 [2024-12-05 14:03:04.487896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:21.958 [2024-12-05 14:03:04.487920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:21.958 [2024-12-05 14:03:04.488381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:21.958 [2024-12-05 14:03:04.488544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:21.958 [2024-12-05 14:03:04.488553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:21.958 [2024-12-05 14:03:04.488559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:21.958 [2024-12-05 14:03:04.488566] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:21.958 [2024-12-05 14:03:04.500251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:21.958 [2024-12-05 14:03:04.500666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.958 [2024-12-05 14:03:04.500684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:21.958 [2024-12-05 14:03:04.500692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:21.958 [2024-12-05 14:03:04.500852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:21.958 [2024-12-05 14:03:04.501013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:21.958 [2024-12-05 14:03:04.501022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:21.958 [2024-12-05 14:03:04.501029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:21.958 [2024-12-05 14:03:04.501036] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:21.958 [2024-12-05 14:03:04.513125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:21.958 [2024-12-05 14:03:04.513579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.958 [2024-12-05 14:03:04.513597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:21.958 [2024-12-05 14:03:04.513604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:21.958 [2024-12-05 14:03:04.513765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:21.958 [2024-12-05 14:03:04.513925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:21.958 [2024-12-05 14:03:04.513934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:21.958 [2024-12-05 14:03:04.513940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:21.958 [2024-12-05 14:03:04.513947] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:21.959 [2024-12-05 14:03:04.525958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:21.959 [2024-12-05 14:03:04.526304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.959 [2024-12-05 14:03:04.526321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:21.959 [2024-12-05 14:03:04.526334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:21.959 [2024-12-05 14:03:04.526501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:21.959 [2024-12-05 14:03:04.526662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:21.959 [2024-12-05 14:03:04.526672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:21.959 [2024-12-05 14:03:04.526679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:21.959 [2024-12-05 14:03:04.526685] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:21.959 [2024-12-05 14:03:04.538946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:21.959 [2024-12-05 14:03:04.539303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.959 [2024-12-05 14:03:04.539321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:21.959 [2024-12-05 14:03:04.539329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:21.959 [2024-12-05 14:03:04.539512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:21.959 [2024-12-05 14:03:04.539688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:21.959 [2024-12-05 14:03:04.539698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:21.959 [2024-12-05 14:03:04.539705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:21.959 [2024-12-05 14:03:04.539712] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:22.279 [2024-12-05 14:03:04.551760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:22.279 [2024-12-05 14:03:04.552082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.279 [2024-12-05 14:03:04.552100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:22.279 [2024-12-05 14:03:04.552108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:22.279 [2024-12-05 14:03:04.552277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:22.279 [2024-12-05 14:03:04.552455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:22.279 [2024-12-05 14:03:04.552466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:22.279 [2024-12-05 14:03:04.552472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:22.279 [2024-12-05 14:03:04.552479] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:22.279 [2024-12-05 14:03:04.564491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:22.279 [2024-12-05 14:03:04.564898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.279 [2024-12-05 14:03:04.564915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:22.279 [2024-12-05 14:03:04.564923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:22.279 [2024-12-05 14:03:04.565082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:22.279 [2024-12-05 14:03:04.565245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:22.279 [2024-12-05 14:03:04.565255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:22.279 [2024-12-05 14:03:04.565261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:22.279 [2024-12-05 14:03:04.565267] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:22.279 [2024-12-05 14:03:04.577431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:22.279 [2024-12-05 14:03:04.577871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.279 [2024-12-05 14:03:04.577887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:22.279 [2024-12-05 14:03:04.577895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:22.279 [2024-12-05 14:03:04.578054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:22.279 [2024-12-05 14:03:04.578214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:22.279 [2024-12-05 14:03:04.578225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:22.279 [2024-12-05 14:03:04.578231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:22.279 [2024-12-05 14:03:04.578238] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:22.279 [2024-12-05 14:03:04.590244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:22.279 [2024-12-05 14:03:04.590668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.279 [2024-12-05 14:03:04.590685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:22.279 [2024-12-05 14:03:04.590692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:22.279 [2024-12-05 14:03:04.590853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:22.279 [2024-12-05 14:03:04.591013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:22.279 [2024-12-05 14:03:04.591023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:22.280 [2024-12-05 14:03:04.591029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:22.280 [2024-12-05 14:03:04.591036] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:22.280 [2024-12-05 14:03:04.602999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:22.280 [2024-12-05 14:03:04.603348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.280 [2024-12-05 14:03:04.603365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:22.280 [2024-12-05 14:03:04.603379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:22.280 [2024-12-05 14:03:04.603539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:22.280 [2024-12-05 14:03:04.603700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:22.280 [2024-12-05 14:03:04.603711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:22.280 [2024-12-05 14:03:04.603717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:22.280 [2024-12-05 14:03:04.603727] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:22.280 [2024-12-05 14:03:04.615917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:22.280 [2024-12-05 14:03:04.616338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.280 [2024-12-05 14:03:04.616391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:22.280 [2024-12-05 14:03:04.616418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:22.280 [2024-12-05 14:03:04.617001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:22.280 [2024-12-05 14:03:04.617599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:22.280 [2024-12-05 14:03:04.617626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:22.280 [2024-12-05 14:03:04.617646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:22.280 [2024-12-05 14:03:04.617667] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:22.280 [2024-12-05 14:03:04.630168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:22.280 5726.60 IOPS, 22.37 MiB/s [2024-12-05T13:03:04.867Z] [2024-12-05 14:03:04.630593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.280 [2024-12-05 14:03:04.630611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:22.280 [2024-12-05 14:03:04.630629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:22.280 [2024-12-05 14:03:04.630790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:22.280 [2024-12-05 14:03:04.630950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:22.280 [2024-12-05 14:03:04.630959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:22.280 [2024-12-05 14:03:04.630965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:22.280 [2024-12-05 14:03:04.630972] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:22.280 [2024-12-05 14:03:04.643012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:22.280 [2024-12-05 14:03:04.643350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.280 [2024-12-05 14:03:04.643398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:22.280 [2024-12-05 14:03:04.643426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:22.280 [2024-12-05 14:03:04.643951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:22.280 [2024-12-05 14:03:04.644116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:22.280 [2024-12-05 14:03:04.644126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:22.280 [2024-12-05 14:03:04.644140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:22.280 [2024-12-05 14:03:04.644148] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:22.280 [2024-12-05 14:03:04.655904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:22.280 [2024-12-05 14:03:04.656373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.280 [2024-12-05 14:03:04.656391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:22.280 [2024-12-05 14:03:04.656399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:22.280 [2024-12-05 14:03:04.656573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:22.280 [2024-12-05 14:03:04.656747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:22.280 [2024-12-05 14:03:04.656757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:22.280 [2024-12-05 14:03:04.656765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:22.280 [2024-12-05 14:03:04.656775] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:22.280 [2024-12-05 14:03:04.668920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:22.280 [2024-12-05 14:03:04.669372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.280 [2024-12-05 14:03:04.669391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:22.280 [2024-12-05 14:03:04.669398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:22.280 [2024-12-05 14:03:04.669568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:22.280 [2024-12-05 14:03:04.669737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:22.280 [2024-12-05 14:03:04.669747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:22.280 [2024-12-05 14:03:04.669754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:22.280 [2024-12-05 14:03:04.669761] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:22.280 [2024-12-05 14:03:04.681813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.280 [2024-12-05 14:03:04.682222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.280 [2024-12-05 14:03:04.682239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.280 [2024-12-05 14:03:04.682246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.280 [2024-12-05 14:03:04.682421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.280 [2024-12-05 14:03:04.682583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.280 [2024-12-05 14:03:04.682592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.280 [2024-12-05 14:03:04.682599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.280 [2024-12-05 14:03:04.682605] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.280 [2024-12-05 14:03:04.694640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.280 [2024-12-05 14:03:04.695036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.280 [2024-12-05 14:03:04.695052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.280 [2024-12-05 14:03:04.695062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.280 [2024-12-05 14:03:04.695222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.280 [2024-12-05 14:03:04.695389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.280 [2024-12-05 14:03:04.695399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.280 [2024-12-05 14:03:04.695406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.280 [2024-12-05 14:03:04.695413] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.280 [2024-12-05 14:03:04.707436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.280 [2024-12-05 14:03:04.707712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.280 [2024-12-05 14:03:04.707729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.280 [2024-12-05 14:03:04.707737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.280 [2024-12-05 14:03:04.707920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.280 [2024-12-05 14:03:04.708091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.280 [2024-12-05 14:03:04.708101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.280 [2024-12-05 14:03:04.708107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.280 [2024-12-05 14:03:04.708114] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.280 [2024-12-05 14:03:04.720520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.280 [2024-12-05 14:03:04.720932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.281 [2024-12-05 14:03:04.720949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.281 [2024-12-05 14:03:04.720957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.281 [2024-12-05 14:03:04.721125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.281 [2024-12-05 14:03:04.721295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.281 [2024-12-05 14:03:04.721304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.281 [2024-12-05 14:03:04.721311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.281 [2024-12-05 14:03:04.721318] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.281 [2024-12-05 14:03:04.733392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.281 [2024-12-05 14:03:04.733737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.281 [2024-12-05 14:03:04.733755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.281 [2024-12-05 14:03:04.733763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.281 [2024-12-05 14:03:04.733922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.281 [2024-12-05 14:03:04.734086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.281 [2024-12-05 14:03:04.734096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.281 [2024-12-05 14:03:04.734102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.281 [2024-12-05 14:03:04.734108] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.281 [2024-12-05 14:03:04.746202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.281 [2024-12-05 14:03:04.746523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.281 [2024-12-05 14:03:04.746540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.281 [2024-12-05 14:03:04.746547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.281 [2024-12-05 14:03:04.746706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.281 [2024-12-05 14:03:04.746866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.281 [2024-12-05 14:03:04.746876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.281 [2024-12-05 14:03:04.746883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.281 [2024-12-05 14:03:04.746889] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.281 [2024-12-05 14:03:04.759047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.281 [2024-12-05 14:03:04.759502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.281 [2024-12-05 14:03:04.759548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.281 [2024-12-05 14:03:04.759571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.281 [2024-12-05 14:03:04.759964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.281 [2024-12-05 14:03:04.760125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.281 [2024-12-05 14:03:04.760135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.281 [2024-12-05 14:03:04.760142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.281 [2024-12-05 14:03:04.760148] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.281 [2024-12-05 14:03:04.771866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.281 [2024-12-05 14:03:04.772258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.281 [2024-12-05 14:03:04.772274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.281 [2024-12-05 14:03:04.772282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.281 [2024-12-05 14:03:04.772448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.281 [2024-12-05 14:03:04.772609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.281 [2024-12-05 14:03:04.772619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.281 [2024-12-05 14:03:04.772629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.281 [2024-12-05 14:03:04.772636] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.281 [2024-12-05 14:03:04.784696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.281 [2024-12-05 14:03:04.785121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.281 [2024-12-05 14:03:04.785166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.281 [2024-12-05 14:03:04.785190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.281 [2024-12-05 14:03:04.785793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.281 [2024-12-05 14:03:04.785987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.281 [2024-12-05 14:03:04.785996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.281 [2024-12-05 14:03:04.786003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.281 [2024-12-05 14:03:04.786009] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.281 [2024-12-05 14:03:04.797478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.281 [2024-12-05 14:03:04.797747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.281 [2024-12-05 14:03:04.797765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.281 [2024-12-05 14:03:04.797773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.281 [2024-12-05 14:03:04.797932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.281 [2024-12-05 14:03:04.798092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.281 [2024-12-05 14:03:04.798102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.281 [2024-12-05 14:03:04.798109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.281 [2024-12-05 14:03:04.798115] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.281 [2024-12-05 14:03:04.810286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.281 [2024-12-05 14:03:04.810619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.281 [2024-12-05 14:03:04.810636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.281 [2024-12-05 14:03:04.810644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.281 [2024-12-05 14:03:04.810803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.281 [2024-12-05 14:03:04.810963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.281 [2024-12-05 14:03:04.810973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.281 [2024-12-05 14:03:04.810979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.281 [2024-12-05 14:03:04.810986] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.281 [2024-12-05 14:03:04.823076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.281 [2024-12-05 14:03:04.823429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.281 [2024-12-05 14:03:04.823447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.281 [2024-12-05 14:03:04.823454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.281 [2024-12-05 14:03:04.823614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.281 [2024-12-05 14:03:04.823774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.281 [2024-12-05 14:03:04.823783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.281 [2024-12-05 14:03:04.823789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.281 [2024-12-05 14:03:04.823796] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.281 [2024-12-05 14:03:04.836008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.281 [2024-12-05 14:03:04.836337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.281 [2024-12-05 14:03:04.836354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.281 [2024-12-05 14:03:04.836362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.281 [2024-12-05 14:03:04.836538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.281 [2024-12-05 14:03:04.836706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.281 [2024-12-05 14:03:04.836717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.281 [2024-12-05 14:03:04.836724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.282 [2024-12-05 14:03:04.836731] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.282 [2024-12-05 14:03:04.848845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.282 [2024-12-05 14:03:04.849277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-12-05 14:03:04.849323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.282 [2024-12-05 14:03:04.849347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.282 [2024-12-05 14:03:04.849944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.282 [2024-12-05 14:03:04.850417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.282 [2024-12-05 14:03:04.850427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.282 [2024-12-05 14:03:04.850433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.282 [2024-12-05 14:03:04.850440] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.282 [2024-12-05 14:03:04.861665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.282 [2024-12-05 14:03:04.862053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-12-05 14:03:04.862070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.282 [2024-12-05 14:03:04.862080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.282 [2024-12-05 14:03:04.862240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.282 [2024-12-05 14:03:04.862424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.282 [2024-12-05 14:03:04.862435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.282 [2024-12-05 14:03:04.862442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.282 [2024-12-05 14:03:04.862449] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.542 [2024-12-05 14:03:04.874488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.542 [2024-12-05 14:03:04.874842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-12-05 14:03:04.874860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.542 [2024-12-05 14:03:04.874868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.542 [2024-12-05 14:03:04.875037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.542 [2024-12-05 14:03:04.875205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.542 [2024-12-05 14:03:04.875214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.542 [2024-12-05 14:03:04.875221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.542 [2024-12-05 14:03:04.875228] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.542 [2024-12-05 14:03:04.887287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.542 [2024-12-05 14:03:04.887659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-12-05 14:03:04.887679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.542 [2024-12-05 14:03:04.887687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.542 [2024-12-05 14:03:04.887856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.542 [2024-12-05 14:03:04.888025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.542 [2024-12-05 14:03:04.888035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.542 [2024-12-05 14:03:04.888041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.542 [2024-12-05 14:03:04.888048] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.542 [2024-12-05 14:03:04.900056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.542 [2024-12-05 14:03:04.900406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-12-05 14:03:04.900424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.542 [2024-12-05 14:03:04.900431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.542 [2024-12-05 14:03:04.900591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.542 [2024-12-05 14:03:04.900754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.542 [2024-12-05 14:03:04.900764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.542 [2024-12-05 14:03:04.900770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.542 [2024-12-05 14:03:04.900777] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.542 [2024-12-05 14:03:04.912917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.542 [2024-12-05 14:03:04.913304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-12-05 14:03:04.913321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.542 [2024-12-05 14:03:04.913329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.542 [2024-12-05 14:03:04.913492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.542 [2024-12-05 14:03:04.913654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.542 [2024-12-05 14:03:04.913663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.542 [2024-12-05 14:03:04.913669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.542 [2024-12-05 14:03:04.913677] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.542 [2024-12-05 14:03:04.925681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.542 [2024-12-05 14:03:04.926055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-12-05 14:03:04.926072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.542 [2024-12-05 14:03:04.926080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.542 [2024-12-05 14:03:04.926239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.542 [2024-12-05 14:03:04.926407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.542 [2024-12-05 14:03:04.926417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.542 [2024-12-05 14:03:04.926423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.542 [2024-12-05 14:03:04.926430] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.542 [2024-12-05 14:03:04.938610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.542 [2024-12-05 14:03:04.938970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-12-05 14:03:04.938986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.542 [2024-12-05 14:03:04.938994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.542 [2024-12-05 14:03:04.939153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.542 [2024-12-05 14:03:04.939312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.542 [2024-12-05 14:03:04.939322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.542 [2024-12-05 14:03:04.939331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.542 [2024-12-05 14:03:04.939338] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.542 [2024-12-05 14:03:04.951479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.542 [2024-12-05 14:03:04.951823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-12-05 14:03:04.951839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.542 [2024-12-05 14:03:04.951846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.542 [2024-12-05 14:03:04.952006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.542 [2024-12-05 14:03:04.952167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.542 [2024-12-05 14:03:04.952176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.542 [2024-12-05 14:03:04.952183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.542 [2024-12-05 14:03:04.952189] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.542 [2024-12-05 14:03:04.964340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.542 [2024-12-05 14:03:04.964618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.542 [2024-12-05 14:03:04.964635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.542 [2024-12-05 14:03:04.964643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.542 [2024-12-05 14:03:04.964802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.542 [2024-12-05 14:03:04.964963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.542 [2024-12-05 14:03:04.964973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.542 [2024-12-05 14:03:04.964980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.542 [2024-12-05 14:03:04.964987] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.542 [2024-12-05 14:03:04.977442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.543 [2024-12-05 14:03:04.977799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-12-05 14:03:04.977817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.543 [2024-12-05 14:03:04.977825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.543 [2024-12-05 14:03:04.977999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.543 [2024-12-05 14:03:04.978181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.543 [2024-12-05 14:03:04.978191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.543 [2024-12-05 14:03:04.978197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.543 [2024-12-05 14:03:04.978203] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.543 [2024-12-05 14:03:04.990335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.543 [2024-12-05 14:03:04.990713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-12-05 14:03:04.990729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.543 [2024-12-05 14:03:04.990737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.543 [2024-12-05 14:03:04.990896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.543 [2024-12-05 14:03:04.991057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.543 [2024-12-05 14:03:04.991067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.543 [2024-12-05 14:03:04.991073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.543 [2024-12-05 14:03:04.991079] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.543 [2024-12-05 14:03:05.003169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.543 [2024-12-05 14:03:05.003483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-12-05 14:03:05.003501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.543 [2024-12-05 14:03:05.003509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.543 [2024-12-05 14:03:05.003669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.543 [2024-12-05 14:03:05.003829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.543 [2024-12-05 14:03:05.003838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.543 [2024-12-05 14:03:05.003844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.543 [2024-12-05 14:03:05.003850] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.543 [2024-12-05 14:03:05.015938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.543 [2024-12-05 14:03:05.016285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-12-05 14:03:05.016302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.543 [2024-12-05 14:03:05.016310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.543 [2024-12-05 14:03:05.016474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.543 [2024-12-05 14:03:05.016635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.543 [2024-12-05 14:03:05.016645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.543 [2024-12-05 14:03:05.016651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.543 [2024-12-05 14:03:05.016659] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.543 [2024-12-05 14:03:05.028813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.543 [2024-12-05 14:03:05.029168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-12-05 14:03:05.029187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.543 [2024-12-05 14:03:05.029197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.543 [2024-12-05 14:03:05.029357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.543 [2024-12-05 14:03:05.029522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.543 [2024-12-05 14:03:05.029533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.543 [2024-12-05 14:03:05.029540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.543 [2024-12-05 14:03:05.029547] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.543 [2024-12-05 14:03:05.041603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.543 [2024-12-05 14:03:05.041913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-12-05 14:03:05.041930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.543 [2024-12-05 14:03:05.041938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.543 [2024-12-05 14:03:05.042097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.543 [2024-12-05 14:03:05.042257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.543 [2024-12-05 14:03:05.042267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.543 [2024-12-05 14:03:05.042273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.543 [2024-12-05 14:03:05.042280] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.543 [2024-12-05 14:03:05.054426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.543 [2024-12-05 14:03:05.054818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-12-05 14:03:05.054836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.543 [2024-12-05 14:03:05.054843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.543 [2024-12-05 14:03:05.055004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.543 [2024-12-05 14:03:05.055164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.543 [2024-12-05 14:03:05.055174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.543 [2024-12-05 14:03:05.055180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.543 [2024-12-05 14:03:05.055186] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.543 [2024-12-05 14:03:05.067175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.543 [2024-12-05 14:03:05.067461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-12-05 14:03:05.067478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.543 [2024-12-05 14:03:05.067485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.543 [2024-12-05 14:03:05.067644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.543 [2024-12-05 14:03:05.067807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.543 [2024-12-05 14:03:05.067818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.543 [2024-12-05 14:03:05.067826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.543 [2024-12-05 14:03:05.067834] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.543 [2024-12-05 14:03:05.079993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.543 [2024-12-05 14:03:05.080408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-12-05 14:03:05.080426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.543 [2024-12-05 14:03:05.080434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.543 [2024-12-05 14:03:05.080594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.543 [2024-12-05 14:03:05.080754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.543 [2024-12-05 14:03:05.080763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.543 [2024-12-05 14:03:05.080769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.543 [2024-12-05 14:03:05.080776] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.543 [2024-12-05 14:03:05.092733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.543 [2024-12-05 14:03:05.093083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.543 [2024-12-05 14:03:05.093100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.543 [2024-12-05 14:03:05.093107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.543 [2024-12-05 14:03:05.093266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.543 [2024-12-05 14:03:05.093431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.544 [2024-12-05 14:03:05.093441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.544 [2024-12-05 14:03:05.093447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.544 [2024-12-05 14:03:05.093453] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.544 [2024-12-05 14:03:05.105549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.544 [2024-12-05 14:03:05.105959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-12-05 14:03:05.106003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.544 [2024-12-05 14:03:05.106027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.544 [2024-12-05 14:03:05.106624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.544 [2024-12-05 14:03:05.106975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.544 [2024-12-05 14:03:05.106984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.544 [2024-12-05 14:03:05.106994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.544 [2024-12-05 14:03:05.107002] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.544 [2024-12-05 14:03:05.118410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.544 [2024-12-05 14:03:05.118766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.544 [2024-12-05 14:03:05.118783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.544 [2024-12-05 14:03:05.118790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.544 [2024-12-05 14:03:05.118949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.544 [2024-12-05 14:03:05.119109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.544 [2024-12-05 14:03:05.119118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.544 [2024-12-05 14:03:05.119125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.544 [2024-12-05 14:03:05.119131] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.804 [2024-12-05 14:03:05.131401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.804 [2024-12-05 14:03:05.131742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.804 [2024-12-05 14:03:05.131760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.804 [2024-12-05 14:03:05.131768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.804 [2024-12-05 14:03:05.131941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.804 [2024-12-05 14:03:05.132115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.804 [2024-12-05 14:03:05.132125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.804 [2024-12-05 14:03:05.132133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.804 [2024-12-05 14:03:05.132141] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
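The repeated `connect() failed, errno = 111` records above are ECONNREFUSED: the host keeps retrying 10.0.0.2:4420 while nothing is listening there (the target process has been killed and not yet restarted, as the next lines show). A minimal sketch, separate from the test itself, that reproduces the same errno against a local port with no listener (the `probe` helper is illustrative, not SPDK code):

```python
import errno
import socket

def probe(host: str, port: int) -> int:
    """Attempt a TCP connect; return 0 on success or the failing errno."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.connect((host, port))
        return 0
    except OSError as e:
        return e.errno
    finally:
        s.close()

# Grab a currently-free port, then close the listener so nothing accepts
# on it -- the next connect attempt should be refused, which is errno 111
# (ECONNREFUSED) on Linux, matching the posix_sock_create records above.
tmp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tmp.bind(("127.0.0.1", 0))
port = tmp.getsockname()[1]
tmp.close()

result = probe("127.0.0.1", port)
```
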
00:31:22.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 820415 Killed "${NVMF_APP[@]}" "$@" 00:31:22.804 14:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:31:22.804 14:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:22.804 14:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:22.804 14:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:22.804 14:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:22.804 [2024-12-05 14:03:05.144377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.804 [2024-12-05 14:03:05.144733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.804 [2024-12-05 14:03:05.144752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.804 [2024-12-05 14:03:05.144761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.804 [2024-12-05 14:03:05.144945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.804 [2024-12-05 14:03:05.145115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.804 [2024-12-05 14:03:05.145131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.804 [2024-12-05 14:03:05.145137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:31:22.804 [2024-12-05 14:03:05.145144] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:22.804 14:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=821946 00:31:22.804 14:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:22.804 14:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 821946 00:31:22.804 14:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 821946 ']' 00:31:22.804 14:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:22.804 14:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:22.804 14:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:22.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:22.804 14:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:22.804 14:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:22.804 [2024-12-05 14:03:05.157349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.804 [2024-12-05 14:03:05.157743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.804 [2024-12-05 14:03:05.157762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.804 [2024-12-05 14:03:05.157770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.804 [2024-12-05 14:03:05.157944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.804 [2024-12-05 14:03:05.158120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.804 [2024-12-05 14:03:05.158130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.804 [2024-12-05 14:03:05.158138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.804 [2024-12-05 14:03:05.158146] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
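The `waitforlisten` step above blocks until the freshly started `nvmf_tgt` exposes its RPC socket at /var/tmp/spdk.sock. A hypothetical stand-in for that polling loop (the `wait_for_unix_socket` name, timeout, and interval are illustrative assumptions, not the actual autotest helper):

```python
import os
import time

def wait_for_unix_socket(path: str, timeout: float = 5.0,
                         interval: float = 0.05) -> bool:
    """Poll until a UNIX-domain socket path appears on disk, or give up
    once the timeout elapses. Returns True if the path showed up."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return False
```

In the real script the helper also retries the RPC call itself; checking for the socket path is the simplest observable part of that handshake.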
00:31:22.804 [2024-12-05 14:03:05.170409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.804 [2024-12-05 14:03:05.170818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.804 [2024-12-05 14:03:05.170835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.804 [2024-12-05 14:03:05.170843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.804 [2024-12-05 14:03:05.171016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.804 [2024-12-05 14:03:05.171191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.804 [2024-12-05 14:03:05.171201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.804 [2024-12-05 14:03:05.171208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.804 [2024-12-05 14:03:05.171215] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.804 [2024-12-05 14:03:05.183453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.804 [2024-12-05 14:03:05.183784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.804 [2024-12-05 14:03:05.183801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.804 [2024-12-05 14:03:05.183809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.804 [2024-12-05 14:03:05.183977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.804 [2024-12-05 14:03:05.184146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.804 [2024-12-05 14:03:05.184155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.804 [2024-12-05 14:03:05.184162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.804 [2024-12-05 14:03:05.184169] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.804 [2024-12-05 14:03:05.196370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.804 [2024-12-05 14:03:05.196797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.804 [2024-12-05 14:03:05.196814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.804 [2024-12-05 14:03:05.196822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.804 [2024-12-05 14:03:05.196960] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:31:22.804 [2024-12-05 14:03:05.196992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.804 [2024-12-05 14:03:05.196999] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:22.804 [2024-12-05 14:03:05.197161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.804 [2024-12-05 14:03:05.197169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.804 [2024-12-05 14:03:05.197176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.804 [2024-12-05 14:03:05.197182] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.804 [2024-12-05 14:03:05.209317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.804 [2024-12-05 14:03:05.209748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.804 [2024-12-05 14:03:05.209766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.804 [2024-12-05 14:03:05.209774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.804 [2024-12-05 14:03:05.209944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.804 [2024-12-05 14:03:05.210114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.804 [2024-12-05 14:03:05.210123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.804 [2024-12-05 14:03:05.210130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.804 [2024-12-05 14:03:05.210141] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.804 [2024-12-05 14:03:05.222316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.804 [2024-12-05 14:03:05.222658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.804 [2024-12-05 14:03:05.222676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.804 [2024-12-05 14:03:05.222684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.804 [2024-12-05 14:03:05.222853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.804 [2024-12-05 14:03:05.223022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.804 [2024-12-05 14:03:05.223032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.804 [2024-12-05 14:03:05.223039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.804 [2024-12-05 14:03:05.223047] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.805 [2024-12-05 14:03:05.235308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:22.805 [2024-12-05 14:03:05.235680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.805 [2024-12-05 14:03:05.235698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:22.805 [2024-12-05 14:03:05.235706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:22.805 [2024-12-05 14:03:05.235879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:22.805 [2024-12-05 14:03:05.236055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:22.805 [2024-12-05 14:03:05.236066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:22.805 [2024-12-05 14:03:05.236073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:22.805 [2024-12-05 14:03:05.236081] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:22.805 [2024-12-05 14:03:05.248343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:22.805 [2024-12-05 14:03:05.248682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.805 [2024-12-05 14:03:05.248700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:22.805 [2024-12-05 14:03:05.248708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:22.805 [2024-12-05 14:03:05.248878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:22.805 [2024-12-05 14:03:05.249047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:22.805 [2024-12-05 14:03:05.249056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:22.805 [2024-12-05 14:03:05.249063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:22.805 [2024-12-05 14:03:05.249070] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:22.805 [2024-12-05 14:03:05.261317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:22.805 [2024-12-05 14:03:05.261708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.805 [2024-12-05 14:03:05.261729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:22.805 [2024-12-05 14:03:05.261737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:22.805 [2024-12-05 14:03:05.261906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:22.805 [2024-12-05 14:03:05.262076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:22.805 [2024-12-05 14:03:05.262085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:22.805 [2024-12-05 14:03:05.262091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:22.805 [2024-12-05 14:03:05.262098] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:22.805 [2024-12-05 14:03:05.274290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:22.805 [2024-12-05 14:03:05.274704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.805 [2024-12-05 14:03:05.274722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:22.805 [2024-12-05 14:03:05.274729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:22.805 [2024-12-05 14:03:05.274899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:22.805 [2024-12-05 14:03:05.275068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:22.805 [2024-12-05 14:03:05.275078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:22.805 [2024-12-05 14:03:05.275084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:22.805 [2024-12-05 14:03:05.275091] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:22.805 [2024-12-05 14:03:05.276867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:31:22.805 [2024-12-05 14:03:05.287318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:22.805 [2024-12-05 14:03:05.287788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.805 [2024-12-05 14:03:05.287810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:22.805 [2024-12-05 14:03:05.287819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:22.805 [2024-12-05 14:03:05.287991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:22.805 [2024-12-05 14:03:05.288162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:22.805 [2024-12-05 14:03:05.288173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:22.805 [2024-12-05 14:03:05.288181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:22.805 [2024-12-05 14:03:05.288207] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:22.805 [2024-12-05 14:03:05.300357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:22.805 [2024-12-05 14:03:05.300786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.805 [2024-12-05 14:03:05.300804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:22.805 [2024-12-05 14:03:05.300816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:22.805 [2024-12-05 14:03:05.300985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:22.805 [2024-12-05 14:03:05.301156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:22.805 [2024-12-05 14:03:05.301166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:22.805 [2024-12-05 14:03:05.301173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:22.805 [2024-12-05 14:03:05.301180] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:22.805 [2024-12-05 14:03:05.313397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:22.805 [2024-12-05 14:03:05.313717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.805 [2024-12-05 14:03:05.313735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:22.805 [2024-12-05 14:03:05.313742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:22.805 [2024-12-05 14:03:05.313911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:22.805 [2024-12-05 14:03:05.314079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:22.805 [2024-12-05 14:03:05.314088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:22.805 [2024-12-05 14:03:05.314095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:22.805 [2024-12-05 14:03:05.314102] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:22.805 [2024-12-05 14:03:05.318584] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:22.805 [2024-12-05 14:03:05.318612] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:22.805 [2024-12-05 14:03:05.318619] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:22.805 [2024-12-05 14:03:05.318625] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:22.805 [2024-12-05 14:03:05.318631] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:22.805 [2024-12-05 14:03:05.319992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:31:22.805 [2024-12-05 14:03:05.320102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:22.805 [2024-12-05 14:03:05.320103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:31:22.805 [2024-12-05 14:03:05.326529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:22.805 [2024-12-05 14:03:05.326953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.805 [2024-12-05 14:03:05.326974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:22.805 [2024-12-05 14:03:05.326984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:22.805 [2024-12-05 14:03:05.327160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:22.805 [2024-12-05 14:03:05.327337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:22.805 [2024-12-05 14:03:05.327347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:22.805 [2024-12-05 14:03:05.327355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:22.805 [2024-12-05 14:03:05.327374] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:22.805 [2024-12-05 14:03:05.339617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:22.805 [2024-12-05 14:03:05.340096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.805 [2024-12-05 14:03:05.340118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:22.805 [2024-12-05 14:03:05.340127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:22.805 [2024-12-05 14:03:05.340302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:22.805 [2024-12-05 14:03:05.340486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:22.805 [2024-12-05 14:03:05.340497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:22.805 [2024-12-05 14:03:05.340505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:22.805 [2024-12-05 14:03:05.340512] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:22.806 [2024-12-05 14:03:05.352750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:22.806 [2024-12-05 14:03:05.353102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.806 [2024-12-05 14:03:05.353123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:22.806 [2024-12-05 14:03:05.353133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:22.806 [2024-12-05 14:03:05.353309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:22.806 [2024-12-05 14:03:05.353491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:22.806 [2024-12-05 14:03:05.353502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:22.806 [2024-12-05 14:03:05.353509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:22.806 [2024-12-05 14:03:05.353518] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:22.806 [2024-12-05 14:03:05.365748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:22.806 [2024-12-05 14:03:05.366203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.806 [2024-12-05 14:03:05.366225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:22.806 [2024-12-05 14:03:05.366235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:22.806 [2024-12-05 14:03:05.366415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:22.806 [2024-12-05 14:03:05.366593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:22.806 [2024-12-05 14:03:05.366603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:22.806 [2024-12-05 14:03:05.366611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:22.806 [2024-12-05 14:03:05.366618] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:22.806 [2024-12-05 14:03:05.378852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:22.806 [2024-12-05 14:03:05.379310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.806 [2024-12-05 14:03:05.379330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:22.806 [2024-12-05 14:03:05.379339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:22.806 [2024-12-05 14:03:05.379519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:22.806 [2024-12-05 14:03:05.379697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:22.806 [2024-12-05 14:03:05.379707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:22.806 [2024-12-05 14:03:05.379714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:22.806 [2024-12-05 14:03:05.379722] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.064 [2024-12-05 14:03:05.391962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.064 [2024-12-05 14:03:05.392326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.064 [2024-12-05 14:03:05.392344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.064 [2024-12-05 14:03:05.392352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.064 [2024-12-05 14:03:05.392532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.064 [2024-12-05 14:03:05.392707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.065 [2024-12-05 14:03:05.392717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.065 [2024-12-05 14:03:05.392724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.065 [2024-12-05 14:03:05.392731] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.065 [2024-12-05 14:03:05.404960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.065 [2024-12-05 14:03:05.405403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.065 [2024-12-05 14:03:05.405421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.065 [2024-12-05 14:03:05.405429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.065 [2024-12-05 14:03:05.405604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.065 [2024-12-05 14:03:05.405780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.065 [2024-12-05 14:03:05.405789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.065 [2024-12-05 14:03:05.405796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.065 [2024-12-05 14:03:05.405804] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.065 [2024-12-05 14:03:05.418039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.065 [2024-12-05 14:03:05.418403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.065 [2024-12-05 14:03:05.418423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.065 [2024-12-05 14:03:05.418431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.065 [2024-12-05 14:03:05.418609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.065 [2024-12-05 14:03:05.418784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.065 [2024-12-05 14:03:05.418794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.065 [2024-12-05 14:03:05.418801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.065 [2024-12-05 14:03:05.418808] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.065 [2024-12-05 14:03:05.431047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.065 [2024-12-05 14:03:05.431449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.065 [2024-12-05 14:03:05.431468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.065 [2024-12-05 14:03:05.431476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.065 [2024-12-05 14:03:05.431649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.065 [2024-12-05 14:03:05.431824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.065 [2024-12-05 14:03:05.431834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.065 [2024-12-05 14:03:05.431841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.065 [2024-12-05 14:03:05.431848] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.065 [2024-12-05 14:03:05.444122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.065 [2024-12-05 14:03:05.444562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.065 [2024-12-05 14:03:05.444582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.065 [2024-12-05 14:03:05.444590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.065 [2024-12-05 14:03:05.444765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.065 [2024-12-05 14:03:05.444941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.065 [2024-12-05 14:03:05.444951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.065 [2024-12-05 14:03:05.444957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.065 [2024-12-05 14:03:05.444965] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.065 [2024-12-05 14:03:05.457218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.065 [2024-12-05 14:03:05.457687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.065 [2024-12-05 14:03:05.457707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.065 [2024-12-05 14:03:05.457715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.065 [2024-12-05 14:03:05.457889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.065 [2024-12-05 14:03:05.458065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.065 [2024-12-05 14:03:05.458078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.065 [2024-12-05 14:03:05.458086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.065 [2024-12-05 14:03:05.458093] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.065 [2024-12-05 14:03:05.470206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.065 [2024-12-05 14:03:05.470560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.065 [2024-12-05 14:03:05.470578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.065 [2024-12-05 14:03:05.470586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.065 [2024-12-05 14:03:05.470760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.065 [2024-12-05 14:03:05.470937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.065 [2024-12-05 14:03:05.470946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.065 [2024-12-05 14:03:05.470953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.065 [2024-12-05 14:03:05.470960] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.065 [2024-12-05 14:03:05.483183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.065 [2024-12-05 14:03:05.483648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.065 [2024-12-05 14:03:05.483666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.065 [2024-12-05 14:03:05.483674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.065 [2024-12-05 14:03:05.483848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.065 [2024-12-05 14:03:05.484025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.065 [2024-12-05 14:03:05.484037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.065 [2024-12-05 14:03:05.484044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.065 [2024-12-05 14:03:05.484051] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.065 [2024-12-05 14:03:05.496300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.065 [2024-12-05 14:03:05.496716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.065 [2024-12-05 14:03:05.496734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.065 [2024-12-05 14:03:05.496742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.065 [2024-12-05 14:03:05.496915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.065 [2024-12-05 14:03:05.497090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.065 [2024-12-05 14:03:05.497100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.065 [2024-12-05 14:03:05.497107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.065 [2024-12-05 14:03:05.497119] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.065 [2024-12-05 14:03:05.509342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.065 [2024-12-05 14:03:05.509635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.065 [2024-12-05 14:03:05.509653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.065 [2024-12-05 14:03:05.509661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.065 [2024-12-05 14:03:05.509835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.065 [2024-12-05 14:03:05.510009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.065 [2024-12-05 14:03:05.510018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.065 [2024-12-05 14:03:05.510025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.065 [2024-12-05 14:03:05.510032] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.065 [2024-12-05 14:03:05.522437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.065 [2024-12-05 14:03:05.522850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.065 [2024-12-05 14:03:05.522868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.065 [2024-12-05 14:03:05.522876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.065 [2024-12-05 14:03:05.523050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.065 [2024-12-05 14:03:05.523226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.065 [2024-12-05 14:03:05.523236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.065 [2024-12-05 14:03:05.523243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.065 [2024-12-05 14:03:05.523250] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.065 [2024-12-05 14:03:05.535486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.065 [2024-12-05 14:03:05.535895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.065 [2024-12-05 14:03:05.535913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.065 [2024-12-05 14:03:05.535920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.065 [2024-12-05 14:03:05.536094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.065 [2024-12-05 14:03:05.536268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.065 [2024-12-05 14:03:05.536278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.065 [2024-12-05 14:03:05.536285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.065 [2024-12-05 14:03:05.536292] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.065 [2024-12-05 14:03:05.548533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.065 [2024-12-05 14:03:05.548874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.065 [2024-12-05 14:03:05.548890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.065 [2024-12-05 14:03:05.548898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.065 [2024-12-05 14:03:05.549072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.065 [2024-12-05 14:03:05.549247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.065 [2024-12-05 14:03:05.549257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.065 [2024-12-05 14:03:05.549266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.065 [2024-12-05 14:03:05.549273] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.065 [2024-12-05 14:03:05.561656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.065 [2024-12-05 14:03:05.562089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.065 [2024-12-05 14:03:05.562106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.065 [2024-12-05 14:03:05.562114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.065 [2024-12-05 14:03:05.562287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.065 [2024-12-05 14:03:05.562467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.065 [2024-12-05 14:03:05.562478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.065 [2024-12-05 14:03:05.562485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.065 [2024-12-05 14:03:05.562492] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.065 [2024-12-05 14:03:05.574708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.065 [2024-12-05 14:03:05.575140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.066 [2024-12-05 14:03:05.575158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.066 [2024-12-05 14:03:05.575166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.066 [2024-12-05 14:03:05.575339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.066 [2024-12-05 14:03:05.575517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.066 [2024-12-05 14:03:05.575528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.066 [2024-12-05 14:03:05.575535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.066 [2024-12-05 14:03:05.575541] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.066 [2024-12-05 14:03:05.587776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.066 [2024-12-05 14:03:05.588132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.066 [2024-12-05 14:03:05.588150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.066 [2024-12-05 14:03:05.588158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.066 [2024-12-05 14:03:05.588335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.066 [2024-12-05 14:03:05.588516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.066 [2024-12-05 14:03:05.588526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.066 [2024-12-05 14:03:05.588533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.066 [2024-12-05 14:03:05.588540] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.066 [2024-12-05 14:03:05.600760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:23.066 [2024-12-05 14:03:05.601199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.066 [2024-12-05 14:03:05.601216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:23.066 [2024-12-05 14:03:05.601224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:23.066 [2024-12-05 14:03:05.601403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:23.066 [2024-12-05 14:03:05.601578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:23.066 [2024-12-05 14:03:05.601588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:23.066 [2024-12-05 14:03:05.601595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:23.066 [2024-12-05 14:03:05.601601] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:23.066 [2024-12-05 14:03:05.613823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:23.066 [2024-12-05 14:03:05.614251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.066 [2024-12-05 14:03:05.614269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:23.066 [2024-12-05 14:03:05.614276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:23.066 [2024-12-05 14:03:05.614454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:23.066 [2024-12-05 14:03:05.614628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:23.066 [2024-12-05 14:03:05.614638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:23.066 [2024-12-05 14:03:05.614645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:23.066 [2024-12-05 14:03:05.614652] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:23.066 [2024-12-05 14:03:05.626878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:23.066 [2024-12-05 14:03:05.627241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.066 [2024-12-05 14:03:05.627259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:23.066 [2024-12-05 14:03:05.627267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:23.066 [2024-12-05 14:03:05.627444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:23.066 [2024-12-05 14:03:05.627619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:23.066 [2024-12-05 14:03:05.627632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:23.066 [2024-12-05 14:03:05.627639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:23.066 [2024-12-05 14:03:05.627646] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:23.066 4772.17 IOPS, 18.64 MiB/s [2024-12-05T13:03:05.653Z] [2024-12-05 14:03:05.639992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:23.066 [2024-12-05 14:03:05.640338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.066 [2024-12-05 14:03:05.640357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:23.066 [2024-12-05 14:03:05.640365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:23.066 [2024-12-05 14:03:05.640544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:23.066 [2024-12-05 14:03:05.640719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:23.066 [2024-12-05 14:03:05.640729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:23.066 [2024-12-05 14:03:05.640736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:23.066 [2024-12-05 14:03:05.640743] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:23.325 [2024-12-05 14:03:05.652968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:23.325 [2024-12-05 14:03:05.653402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.325 [2024-12-05 14:03:05.653421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:23.325 [2024-12-05 14:03:05.653428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:23.325 [2024-12-05 14:03:05.653601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:23.325 [2024-12-05 14:03:05.653777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:23.325 [2024-12-05 14:03:05.653786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:23.325 [2024-12-05 14:03:05.653793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:23.325 [2024-12-05 14:03:05.653800] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:23.325 [2024-12-05 14:03:05.666037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:23.325 [2024-12-05 14:03:05.666363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.325 [2024-12-05 14:03:05.666385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:23.325 [2024-12-05 14:03:05.666393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:23.325 [2024-12-05 14:03:05.666568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:23.325 [2024-12-05 14:03:05.666743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:23.325 [2024-12-05 14:03:05.666753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:23.325 [2024-12-05 14:03:05.666760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:23.325 [2024-12-05 14:03:05.666770] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:23.325 [2024-12-05 14:03:05.679034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:23.325 [2024-12-05 14:03:05.679323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.325 [2024-12-05 14:03:05.679341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:23.325 [2024-12-05 14:03:05.679348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:23.325 [2024-12-05 14:03:05.679526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:23.325 [2024-12-05 14:03:05.679701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:23.325 [2024-12-05 14:03:05.679712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:23.325 [2024-12-05 14:03:05.679720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:23.325 [2024-12-05 14:03:05.679727] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:23.325 [2024-12-05 14:03:05.692127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:23.325 [2024-12-05 14:03:05.692475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.325 [2024-12-05 14:03:05.692493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:23.325 [2024-12-05 14:03:05.692501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:23.325 [2024-12-05 14:03:05.692675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:23.325 [2024-12-05 14:03:05.692849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:23.325 [2024-12-05 14:03:05.692858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:23.325 [2024-12-05 14:03:05.692865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:23.326 [2024-12-05 14:03:05.692872] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:23.326 [2024-12-05 14:03:05.705109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:23.326 [2024-12-05 14:03:05.705536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.326 [2024-12-05 14:03:05.705555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:23.326 [2024-12-05 14:03:05.705563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:23.326 [2024-12-05 14:03:05.705737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:23.326 [2024-12-05 14:03:05.705912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:23.326 [2024-12-05 14:03:05.705922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:23.326 [2024-12-05 14:03:05.705929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:23.326 [2024-12-05 14:03:05.705936] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:23.326 [2024-12-05 14:03:05.718155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:23.326 [2024-12-05 14:03:05.718611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.326 [2024-12-05 14:03:05.718629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:23.326 [2024-12-05 14:03:05.718637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:23.326 [2024-12-05 14:03:05.718811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:23.326 [2024-12-05 14:03:05.718985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:23.326 [2024-12-05 14:03:05.718995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:23.326 [2024-12-05 14:03:05.719002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:23.326 [2024-12-05 14:03:05.719009] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:23.326 [2024-12-05 14:03:05.731233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:23.326 [2024-12-05 14:03:05.731643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.326 [2024-12-05 14:03:05.731662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:23.326 [2024-12-05 14:03:05.731670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:23.326 [2024-12-05 14:03:05.731843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:23.326 [2024-12-05 14:03:05.732017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:23.326 [2024-12-05 14:03:05.732027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:23.326 [2024-12-05 14:03:05.732034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:23.326 [2024-12-05 14:03:05.732041] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:23.326 [2024-12-05 14:03:05.744266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:23.326 [2024-12-05 14:03:05.744617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.326 [2024-12-05 14:03:05.744635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:23.326 [2024-12-05 14:03:05.744642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:23.326 [2024-12-05 14:03:05.744815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:23.326 [2024-12-05 14:03:05.744989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:23.326 [2024-12-05 14:03:05.744999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:23.326 [2024-12-05 14:03:05.745006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:23.326 [2024-12-05 14:03:05.745013] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:23.326 [2024-12-05 14:03:05.757232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:23.326 [2024-12-05 14:03:05.757641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.326 [2024-12-05 14:03:05.757659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:23.326 [2024-12-05 14:03:05.757667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:23.326 [2024-12-05 14:03:05.757847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:23.326 [2024-12-05 14:03:05.758021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:23.326 [2024-12-05 14:03:05.758031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:23.326 [2024-12-05 14:03:05.758037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:23.326 [2024-12-05 14:03:05.758044] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:23.326 [2024-12-05 14:03:05.770257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:23.326 [2024-12-05 14:03:05.770619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.326 [2024-12-05 14:03:05.770637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:23.326 [2024-12-05 14:03:05.770646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:23.326 [2024-12-05 14:03:05.770820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:23.326 [2024-12-05 14:03:05.770993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:23.326 [2024-12-05 14:03:05.771003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:23.326 [2024-12-05 14:03:05.771010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:23.326 [2024-12-05 14:03:05.771017] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:23.326 [2024-12-05 14:03:05.783245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:23.326 [2024-12-05 14:03:05.783662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.326 [2024-12-05 14:03:05.783680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:23.326 [2024-12-05 14:03:05.783688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:23.326 [2024-12-05 14:03:05.783862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:23.326 [2024-12-05 14:03:05.784037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:23.326 [2024-12-05 14:03:05.784047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:23.326 [2024-12-05 14:03:05.784053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:23.326 [2024-12-05 14:03:05.784060] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:23.326 [2024-12-05 14:03:05.796302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:23.326 [2024-12-05 14:03:05.796742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.326 [2024-12-05 14:03:05.796760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:23.326 [2024-12-05 14:03:05.796767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:23.326 [2024-12-05 14:03:05.796942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:23.326 [2024-12-05 14:03:05.797118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:23.326 [2024-12-05 14:03:05.797130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:23.326 [2024-12-05 14:03:05.797137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:23.326 [2024-12-05 14:03:05.797146] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:23.326 [2024-12-05 14:03:05.809378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:23.326 [2024-12-05 14:03:05.809811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.326 [2024-12-05 14:03:05.809829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:23.326 [2024-12-05 14:03:05.809837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:23.326 [2024-12-05 14:03:05.810010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:23.326 [2024-12-05 14:03:05.810184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:23.326 [2024-12-05 14:03:05.810194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:23.326 [2024-12-05 14:03:05.810201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:23.326 [2024-12-05 14:03:05.810208] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:23.326 [2024-12-05 14:03:05.822426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:23.326 [2024-12-05 14:03:05.822861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.326 [2024-12-05 14:03:05.822879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:23.326 [2024-12-05 14:03:05.822887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:23.326 [2024-12-05 14:03:05.823061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:23.326 [2024-12-05 14:03:05.823237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:23.326 [2024-12-05 14:03:05.823247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:23.327 [2024-12-05 14:03:05.823253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:23.327 [2024-12-05 14:03:05.823260] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:23.327 [2024-12-05 14:03:05.835507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:23.327 [2024-12-05 14:03:05.835872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.327 [2024-12-05 14:03:05.835889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:23.327 [2024-12-05 14:03:05.835898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:23.327 [2024-12-05 14:03:05.836071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:23.327 [2024-12-05 14:03:05.836248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:23.327 [2024-12-05 14:03:05.836258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:23.327 [2024-12-05 14:03:05.836265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:23.327 [2024-12-05 14:03:05.836276] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:23.327 [2024-12-05 14:03:05.848516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:23.327 [2024-12-05 14:03:05.848886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.327 [2024-12-05 14:03:05.848903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420 00:31:23.327 [2024-12-05 14:03:05.848910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set 00:31:23.327 [2024-12-05 14:03:05.849083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor 00:31:23.327 [2024-12-05 14:03:05.849257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:23.327 [2024-12-05 14:03:05.849266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:23.327 [2024-12-05 14:03:05.849273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:23.327 [2024-12-05 14:03:05.849280] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:23.327 [2024-12-05 14:03:05.861512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.327 [2024-12-05 14:03:05.861865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.327 [2024-12-05 14:03:05.861883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.327 [2024-12-05 14:03:05.861892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.327 [2024-12-05 14:03:05.862065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.327 [2024-12-05 14:03:05.862240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.327 [2024-12-05 14:03:05.862250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.327 [2024-12-05 14:03:05.862257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.327 [2024-12-05 14:03:05.862264] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.327 [2024-12-05 14:03:05.874478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.327 [2024-12-05 14:03:05.874909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.327 [2024-12-05 14:03:05.874927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.327 [2024-12-05 14:03:05.874934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.327 [2024-12-05 14:03:05.875109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.327 [2024-12-05 14:03:05.875284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.327 [2024-12-05 14:03:05.875294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.327 [2024-12-05 14:03:05.875300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.327 [2024-12-05 14:03:05.875308] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.327 [2024-12-05 14:03:05.887575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.327 [2024-12-05 14:03:05.888014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.327 [2024-12-05 14:03:05.888031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.327 [2024-12-05 14:03:05.888039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.327 [2024-12-05 14:03:05.888213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.327 [2024-12-05 14:03:05.888391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.327 [2024-12-05 14:03:05.888403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.327 [2024-12-05 14:03:05.888410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.327 [2024-12-05 14:03:05.888417] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.327 [2024-12-05 14:03:05.900637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.327 [2024-12-05 14:03:05.901025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.327 [2024-12-05 14:03:05.901043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.327 [2024-12-05 14:03:05.901050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.327 [2024-12-05 14:03:05.901224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.327 [2024-12-05 14:03:05.901403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.327 [2024-12-05 14:03:05.901414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.327 [2024-12-05 14:03:05.901420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.327 [2024-12-05 14:03:05.901428] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.587 [2024-12-05 14:03:05.913663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.587 [2024-12-05 14:03:05.914090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.587 [2024-12-05 14:03:05.914108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.587 [2024-12-05 14:03:05.914116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.587 [2024-12-05 14:03:05.914290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.587 [2024-12-05 14:03:05.914473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.587 [2024-12-05 14:03:05.914484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.587 [2024-12-05 14:03:05.914490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.587 [2024-12-05 14:03:05.914498] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.587 [2024-12-05 14:03:05.926736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.587 [2024-12-05 14:03:05.927140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.587 [2024-12-05 14:03:05.927158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.587 [2024-12-05 14:03:05.927165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.587 [2024-12-05 14:03:05.927342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.587 [2024-12-05 14:03:05.927520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.587 [2024-12-05 14:03:05.927530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.587 [2024-12-05 14:03:05.927537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.587 [2024-12-05 14:03:05.927544] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.587 [2024-12-05 14:03:05.939777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.587 [2024-12-05 14:03:05.940135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.587 [2024-12-05 14:03:05.940152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.587 [2024-12-05 14:03:05.940161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.587 [2024-12-05 14:03:05.940335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.587 [2024-12-05 14:03:05.940515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.587 [2024-12-05 14:03:05.940526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.587 [2024-12-05 14:03:05.940533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.587 [2024-12-05 14:03:05.940540] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.587 [2024-12-05 14:03:05.952762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.587 [2024-12-05 14:03:05.953127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.587 [2024-12-05 14:03:05.953145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.587 [2024-12-05 14:03:05.953152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.587 [2024-12-05 14:03:05.953327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.587 [2024-12-05 14:03:05.953506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.587 [2024-12-05 14:03:05.953518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.587 [2024-12-05 14:03:05.953525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.587 [2024-12-05 14:03:05.953532] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.587 [2024-12-05 14:03:05.965759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.587 [2024-12-05 14:03:05.966164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.587 [2024-12-05 14:03:05.966183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.587 [2024-12-05 14:03:05.966191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.587 [2024-12-05 14:03:05.966365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.587 [2024-12-05 14:03:05.966546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.587 [2024-12-05 14:03:05.966560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.587 [2024-12-05 14:03:05.966567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.587 [2024-12-05 14:03:05.966573] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.587 [2024-12-05 14:03:05.978807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.587 [2024-12-05 14:03:05.979215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.587 [2024-12-05 14:03:05.979233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.587 [2024-12-05 14:03:05.979242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.587 [2024-12-05 14:03:05.979422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.587 [2024-12-05 14:03:05.979598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.587 [2024-12-05 14:03:05.979608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.587 [2024-12-05 14:03:05.979617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.587 [2024-12-05 14:03:05.979625] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.587 [2024-12-05 14:03:05.991869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.587 [2024-12-05 14:03:05.992303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.587 [2024-12-05 14:03:05.992321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.587 [2024-12-05 14:03:05.992329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.587 [2024-12-05 14:03:05.992507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.587 [2024-12-05 14:03:05.992683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.587 [2024-12-05 14:03:05.992693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.587 [2024-12-05 14:03:05.992700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.587 [2024-12-05 14:03:05.992709] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.587 [2024-12-05 14:03:06.004938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.587 [2024-12-05 14:03:06.005375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.587 [2024-12-05 14:03:06.005394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.587 [2024-12-05 14:03:06.005402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.587 [2024-12-05 14:03:06.005576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.587 [2024-12-05 14:03:06.005752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.587 [2024-12-05 14:03:06.005763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.587 [2024-12-05 14:03:06.005771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.587 [2024-12-05 14:03:06.005782] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.587 [2024-12-05 14:03:06.018017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.587 [2024-12-05 14:03:06.018379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.587 [2024-12-05 14:03:06.018397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.587 [2024-12-05 14:03:06.018404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.587 [2024-12-05 14:03:06.018577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.587 [2024-12-05 14:03:06.018751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.588 [2024-12-05 14:03:06.018760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.588 [2024-12-05 14:03:06.018767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.588 [2024-12-05 14:03:06.018774] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.588 [2024-12-05 14:03:06.031009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.588 [2024-12-05 14:03:06.031346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.588 [2024-12-05 14:03:06.031363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.588 [2024-12-05 14:03:06.031376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.588 [2024-12-05 14:03:06.031549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.588 [2024-12-05 14:03:06.031723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.588 [2024-12-05 14:03:06.031734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.588 [2024-12-05 14:03:06.031741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.588 [2024-12-05 14:03:06.031748] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.588 14:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:23.588 14:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:31:23.588 14:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:31:23.588 14:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:23.588 14:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:23.588 [2024-12-05 14:03:06.044019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.588 [2024-12-05 14:03:06.044355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.588 [2024-12-05 14:03:06.044378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.588 [2024-12-05 14:03:06.044387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.588 [2024-12-05 14:03:06.044561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.588 [2024-12-05 14:03:06.044736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.588 [2024-12-05 14:03:06.044747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.588 [2024-12-05 14:03:06.044758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.588 [2024-12-05 14:03:06.044766] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.588 [2024-12-05 14:03:06.056999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.588 [2024-12-05 14:03:06.057296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.588 [2024-12-05 14:03:06.057313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.588 [2024-12-05 14:03:06.057321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.588 [2024-12-05 14:03:06.057500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.588 [2024-12-05 14:03:06.057675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.588 [2024-12-05 14:03:06.057685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.588 [2024-12-05 14:03:06.057692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.588 [2024-12-05 14:03:06.057699] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.588 [2024-12-05 14:03:06.070095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.588 [2024-12-05 14:03:06.070395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.588 [2024-12-05 14:03:06.070415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.588 [2024-12-05 14:03:06.070424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.588 [2024-12-05 14:03:06.070597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.588 [2024-12-05 14:03:06.070773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.588 [2024-12-05 14:03:06.070783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.588 [2024-12-05 14:03:06.070790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.588 [2024-12-05 14:03:06.070797] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.588 14:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:23.588 14:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:31:23.588 14:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:23.588 14:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:23.588 [2024-12-05 14:03:06.083205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.588 [2024-12-05 14:03:06.083485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.588 [2024-12-05 14:03:06.083503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.588 [2024-12-05 14:03:06.083512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.588 [2024-12-05 14:03:06.083689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.588 [2024-12-05 14:03:06.083865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.588 [2024-12-05 14:03:06.083879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.588 [2024-12-05 14:03:06.083886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.588 [2024-12-05 14:03:06.083892] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.588 [2024-12-05 14:03:06.086104] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:23.588 14:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:23.588 14:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:31:23.588 14:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:23.588 14:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:23.588 [2024-12-05 14:03:06.096201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.588 [2024-12-05 14:03:06.096741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.588 [2024-12-05 14:03:06.096762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.588 [2024-12-05 14:03:06.096770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.588 [2024-12-05 14:03:06.096956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.588 [2024-12-05 14:03:06.097143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.588 [2024-12-05 14:03:06.097154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.588 [2024-12-05 14:03:06.097161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.588 [2024-12-05 14:03:06.097169] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.588 [2024-12-05 14:03:06.109271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.588 [2024-12-05 14:03:06.109723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.588 [2024-12-05 14:03:06.109742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.588 [2024-12-05 14:03:06.109750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.588 [2024-12-05 14:03:06.109923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.588 [2024-12-05 14:03:06.110104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.588 [2024-12-05 14:03:06.110114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.588 [2024-12-05 14:03:06.110120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.588 [2024-12-05 14:03:06.110128] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.588 [2024-12-05 14:03:06.122383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.588 [2024-12-05 14:03:06.122811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.588 [2024-12-05 14:03:06.122829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
00:31:23.588 [2024-12-05 14:03:06.122837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
Malloc0
00:31:23.588 [2024-12-05 14:03:06.123010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.588 [2024-12-05 14:03:06.123189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:23.588 [2024-12-05 14:03:06.123201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.588 [2024-12-05 14:03:06.123207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.588 [2024-12-05 14:03:06.123215] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.588 14:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:23.588 14:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:31:23.588 14:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:23.588 14:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:23.589 14:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[2024-12-05 14:03:06.135487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.589 14:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
[2024-12-05 14:03:06.135815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-12-05 14:03:06.135833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5510 with addr=10.0.0.2, port=4420
[2024-12-05 14:03:06.135842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5510 is same with the state(6) to be set
00:31:23.589 14:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
[2024-12-05 14:03:06.136015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5510 (9): Bad file descriptor
00:31:23.589 14:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-12-05 14:03:06.136190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
[2024-12-05 14:03:06.136201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:23.589 [2024-12-05 14:03:06.136209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:23.589 [2024-12-05 14:03:06.136216] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:23.589 14:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:23.589 14:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:23.589 14:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:23.589 14:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:23.589 [2024-12-05 14:03:06.146699] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:23.589 [2024-12-05 14:03:06.148624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:23.589 14:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:23.589 14:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 820797
00:31:23.847 [2024-12-05 14:03:06.213574] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
00:31:25.480 4764.14 IOPS, 18.61 MiB/s
[2024-12-05T13:03:09.003Z] 5590.88 IOPS, 21.84 MiB/s
[2024-12-05T13:03:09.941Z] 6232.33 IOPS, 24.35 MiB/s
[2024-12-05T13:03:10.878Z] 6756.00 IOPS, 26.39 MiB/s
[2024-12-05T13:03:11.815Z] 7185.55 IOPS, 28.07 MiB/s
[2024-12-05T13:03:12.751Z] 7544.33 IOPS, 29.47 MiB/s
[2024-12-05T13:03:13.688Z] 7849.00 IOPS, 30.66 MiB/s
[2024-12-05T13:03:15.065Z] 8105.86 IOPS, 31.66 MiB/s
00:31:32.478 Latency(us)
00:31:32.478 [2024-12-05T13:03:15.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:32.478 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:32.478 Verification LBA range: start 0x0 length 0x4000
00:31:32.478 Nvme1n1 : 15.00 8323.29 32.51 13148.48 0.00 5942.01 628.05 17226.61
00:31:32.478 [2024-12-05T13:03:15.065Z] ===================================================================================================================
00:31:32.478 [2024-12-05T13:03:15.065Z] Total : 8323.29 32.51 13148.48 0.00 5942.01 628.05 17226.61
00:31:32.478 14:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:31:32.478 14:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:32.478 14:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:32.478 14:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:32.478 14:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:32.478 14:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:31:32.478 14:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:31:32.478 14:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:32.478 14:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:31:32.478 14:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:32.479 14:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:31:32.479 14:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:32.479 14:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:31:32.479 14:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:32.479 14:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:31:32.479 14:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:31:32.479 14:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 821946 ']'
00:31:32.479 14:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 821946
00:31:32.479 14:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 821946 ']'
00:31:32.479 14:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 821946
00:31:32.479 14:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname
00:31:32.479 14:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:32.479 14:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 821946
00:31:32.479 14:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:31:32.479 14:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:31:32.479 14:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 821946'
killing process with pid 821946
00:31:32.479 14:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 821946
00:31:32.479 14:03:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 821946
00:31:32.736 14:03:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:32.736 14:03:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:31:32.736 14:03:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:31:32.736 14:03:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr
00:31:32.736 14:03:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save
00:31:32.736 14:03:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:31:32.736 14:03:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore
00:31:32.736 14:03:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:32.736 14:03:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:31:32.736 14:03:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:32.736 14:03:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:32.736 14:03:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:34.636 14:03:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:34.636
00:31:34.636 real 0m26.135s
00:31:34.636 user 1m1.107s
00:31:34.636 sys 0m6.781s
00:31:34.636 14:03:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:34.636 14:03:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:34.636 ************************************
00:31:34.636 END TEST nvmf_bdevperf
00:31:34.636 ************************************
00:31:34.895 14:03:17
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.896 ************************************ 00:31:34.896 START TEST nvmf_target_disconnect 00:31:34.896 ************************************ 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:34.896 * Looking for test storage... 00:31:34.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:31:34.896 14:03:17 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:34.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.896 --rc genhtml_branch_coverage=1 00:31:34.896 --rc genhtml_function_coverage=1 00:31:34.896 --rc genhtml_legend=1 00:31:34.896 --rc geninfo_all_blocks=1 00:31:34.896 --rc geninfo_unexecuted_blocks=1 
00:31:34.896 00:31:34.896 ' 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:34.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.896 --rc genhtml_branch_coverage=1 00:31:34.896 --rc genhtml_function_coverage=1 00:31:34.896 --rc genhtml_legend=1 00:31:34.896 --rc geninfo_all_blocks=1 00:31:34.896 --rc geninfo_unexecuted_blocks=1 00:31:34.896 00:31:34.896 ' 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:34.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.896 --rc genhtml_branch_coverage=1 00:31:34.896 --rc genhtml_function_coverage=1 00:31:34.896 --rc genhtml_legend=1 00:31:34.896 --rc geninfo_all_blocks=1 00:31:34.896 --rc geninfo_unexecuted_blocks=1 00:31:34.896 00:31:34.896 ' 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:34.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.896 --rc genhtml_branch_coverage=1 00:31:34.896 --rc genhtml_function_coverage=1 00:31:34.896 --rc genhtml_legend=1 00:31:34.896 --rc geninfo_all_blocks=1 00:31:34.896 --rc geninfo_unexecuted_blocks=1 00:31:34.896 00:31:34.896 ' 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.896 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:31:34.897 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.897 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:31:34.897 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:34.897 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:34.897 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:34.897 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:34.897 14:03:17 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:34.897 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:34.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:34.897 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:34.897 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:34.897 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:34.897 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:34.897 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:31:34.897 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:31:34.897 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:31:34.897 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:34.897 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:34.897 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:34.897 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:34.897 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:34.897 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:34.897 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:31:34.897 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:34.897 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:34.897 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:34.897 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:31:35.156 14:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:31:41.750 
14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:41.750 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:41.750 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:41.750 Found net devices under 0000:86:00.0: cvl_0_0 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:41.750 Found net devices under 0000:86:00.1: cvl_0_1 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:41.750 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:41.751 14:03:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:41.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:41.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:31:41.751 00:31:41.751 --- 10.0.0.2 ping statistics --- 00:31:41.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:41.751 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:41.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:41.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:31:41.751 00:31:41.751 --- 10.0.0.1 ping statistics --- 00:31:41.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:41.751 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:41.751 14:03:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:41.751 ************************************ 00:31:41.751 START TEST nvmf_target_disconnect_tc1 00:31:41.751 ************************************ 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:41.751 [2024-12-05 14:03:23.559734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.751 [2024-12-05 14:03:23.559786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19f8ac0 with 
addr=10.0.0.2, port=4420 00:31:41.751 [2024-12-05 14:03:23.559809] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:41.751 [2024-12-05 14:03:23.559820] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:41.751 [2024-12-05 14:03:23.559827] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:31:41.751 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:31:41.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:31:41.751 Initializing NVMe Controllers 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:41.751 00:31:41.751 real 0m0.124s 00:31:41.751 user 0m0.047s 00:31:41.751 sys 0m0.072s 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:41.751 ************************************ 00:31:41.751 END TEST nvmf_target_disconnect_tc1 00:31:41.751 ************************************ 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:41.751 14:03:23 
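tc1 above deliberately runs the reconnect example before any target is listening on 10.0.0.2:4420, so connect() fails with errno 111 and the test passes only because the failure was expected: the NOT helper from autotest_common.sh inverts the command's exit status (the es=1 bookkeeping in the trace). A minimal sketch of that inversion pattern, assuming nothing about the real helper beyond the behavior visible in this log:

```shell
# Minimal sketch of the NOT pattern used by nvmf_target_disconnect_tc1:
# run a command that is expected to fail, and succeed only if it does fail.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # expected failure (here: connect() errno 111, no listener yet)
}

NOT false && echo "expected failure observed"   # prints "expected failure observed"
```

The real helper also distinguishes exit codes above 128 (signals) from ordinary failures, which is what the `(( es > 128 ))` branch in the trace is checking.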
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:41.751 ************************************ 00:31:41.751 START TEST nvmf_target_disconnect_tc2 00:31:41.751 ************************************ 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=827454 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 827454 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 827454 ']' 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:41.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:41.751 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:41.751 [2024-12-05 14:03:23.700912] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:31:41.751 [2024-12-05 14:03:23.700955] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:41.751 [2024-12-05 14:03:23.779975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:41.751 [2024-12-05 14:03:23.821409] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:41.751 [2024-12-05 14:03:23.821444] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:41.751 [2024-12-05 14:03:23.821450] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:41.751 [2024-12-05 14:03:23.821456] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:41.751 [2024-12-05 14:03:23.821461] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:41.751 [2024-12-05 14:03:23.823096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:41.752 [2024-12-05 14:03:23.823209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:41.752 [2024-12-05 14:03:23.823318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:41.752 [2024-12-05 14:03:23.823319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:31:41.752 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:41.752 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:31:41.752 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:41.752 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:41.752 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:41.752 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:41.752 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:41.752 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.752 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:41.752 Malloc0 00:31:41.752 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.752 14:03:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:41.752 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.752 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:41.752 [2024-12-05 14:03:23.987021] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:41.752 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.752 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:41.752 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.752 14:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:41.752 14:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.752 14:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:41.752 14:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.752 14:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:41.752 14:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.752 14:03:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:41.752 14:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.752 14:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:41.752 [2024-12-05 14:03:24.015987] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:41.752 14:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.752 14:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:41.752 14:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.752 14:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:41.752 14:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.752 14:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=827526 00:31:41.752 14:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:31:41.752 14:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:43.661 14:03:26 
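The rpc_cmd sequence above (target_disconnect.sh@19 through @26) configures the target that tc2 will later kill: a malloc bdev, the TCP transport, one subsystem with the bdev as a namespace, and data plus discovery listeners on 10.0.0.2:4420. Written out as plain scripts/rpc.py invocations (a sketch: the echo prefix makes it a dry run, since the real calls need the nvmf_tgt running inside the cvl_0_0_ns_spdk namespace):

```shell
# The rpc_cmd calls above as plain scripts/rpc.py invocations (dry run).
RPC="echo scripts/rpc.py"   # drop the echo to actually issue the RPCs

$RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
$RPC nvmf_create_transport -t tcp -o             # -o as captured in the trace above
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

Once the listener notice ("NVMe/TCP Target Listening on 10.0.0.2 port 4420") appears, the script launches the reconnect workload in the background (reconnectpid=827526) and sleeps before killing the target.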
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 827454 00:31:43.661 14:03:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Write completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Write completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Write completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Write completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Write completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Write completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 
Write completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Write completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Write completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Write completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Write completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Write completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Write completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Write completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Write completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Write completed with error (sct=0, sc=8) 00:31:43.661 [2024-12-05 14:03:26.044498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O 
failed 00:31:43.661 Write completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Write completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Write completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Write completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Write completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Write completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Write completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Write completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Write completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Write completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 
00:31:43.661 [2024-12-05 14:03:26.044700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting 
I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.661 starting I/O failed 00:31:43.661 Read completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Write completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Write completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Write completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Read completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Write completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Write completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Read completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Write completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 [2024-12-05 14:03:26.044892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:43.662 Read completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Read completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Read completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Read completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Read completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Read completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Write completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Read completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Write completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Read completed with error (sct=0, sc=8) 
00:31:43.662 starting I/O failed 00:31:43.662 Read completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Write completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Read completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Write completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Write completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Write completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Read completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Read completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Read completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Write completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Read completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Read completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Write completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Write completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Write completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Write completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Write completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Write completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Read completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Read completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Read completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 Read completed with error (sct=0, sc=8) 00:31:43.662 starting I/O failed 00:31:43.662 [2024-12-05 14:03:26.045088] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:43.662 [2024-12-05 14:03:26.045288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.662 [2024-12-05 14:03:26.045310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.662 qpair failed and we were unable to recover it. 00:31:43.662 [2024-12-05 14:03:26.045465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.662 [2024-12-05 14:03:26.045477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.662 qpair failed and we were unable to recover it. 00:31:43.662 [2024-12-05 14:03:26.045628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.662 [2024-12-05 14:03:26.045640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.662 qpair failed and we were unable to recover it. 00:31:43.662 [2024-12-05 14:03:26.045730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.662 [2024-12-05 14:03:26.045741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.662 qpair failed and we were unable to recover it. 00:31:43.662 [2024-12-05 14:03:26.045909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.662 [2024-12-05 14:03:26.045920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.662 qpair failed and we were unable to recover it. 
00:31:43.662 [2024-12-05 14:03:26.046011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.662 [2024-12-05 14:03:26.046044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.662 qpair failed and we were unable to recover it. 
[... the same error pair repeats continuously from 14:03:26.046 through 14:03:26.073: posix_sock_create connect() fails with errno = 111 (ECONNREFUSED on Linux), nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7fdb5c000b90 (addr=10.0.0.2, port=4420), and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:31:43.665 [2024-12-05 14:03:26.073302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.665 [2024-12-05 14:03:26.073335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.665 qpair failed and we were unable to recover it. 
00:31:43.665 [2024-12-05 14:03:26.073506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.665 [2024-12-05 14:03:26.073542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.665 qpair failed and we were unable to recover it. 00:31:43.665 [2024-12-05 14:03:26.073765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.665 [2024-12-05 14:03:26.073798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.665 qpair failed and we were unable to recover it. 00:31:43.665 [2024-12-05 14:03:26.074064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.665 [2024-12-05 14:03:26.074097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.665 qpair failed and we were unable to recover it. 00:31:43.665 [2024-12-05 14:03:26.074430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.665 [2024-12-05 14:03:26.074465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.665 qpair failed and we were unable to recover it. 00:31:43.665 [2024-12-05 14:03:26.074731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.665 [2024-12-05 14:03:26.074764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.665 qpair failed and we were unable to recover it. 
00:31:43.665 [2024-12-05 14:03:26.074976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.665 [2024-12-05 14:03:26.075008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.665 qpair failed and we were unable to recover it. 00:31:43.665 [2024-12-05 14:03:26.075135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.665 [2024-12-05 14:03:26.075168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.665 qpair failed and we were unable to recover it. 00:31:43.665 [2024-12-05 14:03:26.075425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.665 [2024-12-05 14:03:26.075459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.665 qpair failed and we were unable to recover it. 00:31:43.665 [2024-12-05 14:03:26.075697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.665 [2024-12-05 14:03:26.075729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.665 qpair failed and we were unable to recover it. 00:31:43.665 [2024-12-05 14:03:26.075967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.665 [2024-12-05 14:03:26.075999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.665 qpair failed and we were unable to recover it. 
00:31:43.665 [2024-12-05 14:03:26.076264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.665 [2024-12-05 14:03:26.076297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.665 qpair failed and we were unable to recover it. 00:31:43.665 [2024-12-05 14:03:26.076486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.665 [2024-12-05 14:03:26.076519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.665 qpair failed and we were unable to recover it. 00:31:43.666 [2024-12-05 14:03:26.076711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.076743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 00:31:43.666 [2024-12-05 14:03:26.076985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.077018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 00:31:43.666 [2024-12-05 14:03:26.077204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.077236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 
00:31:43.666 [2024-12-05 14:03:26.077417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.077462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 00:31:43.666 [2024-12-05 14:03:26.077732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.077766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 00:31:43.666 [2024-12-05 14:03:26.078039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.078071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 00:31:43.666 [2024-12-05 14:03:26.078363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.078406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 00:31:43.666 [2024-12-05 14:03:26.078616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.078649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 
00:31:43.666 [2024-12-05 14:03:26.078821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.078853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 00:31:43.666 [2024-12-05 14:03:26.079041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.079073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 00:31:43.666 [2024-12-05 14:03:26.079337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.079379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 00:31:43.666 [2024-12-05 14:03:26.079657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.079691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 00:31:43.666 [2024-12-05 14:03:26.079937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.079968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 
00:31:43.666 [2024-12-05 14:03:26.080144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.080184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 00:31:43.666 [2024-12-05 14:03:26.080422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.080455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 00:31:43.666 [2024-12-05 14:03:26.080719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.080751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 00:31:43.666 [2024-12-05 14:03:26.080937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.080970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 00:31:43.666 [2024-12-05 14:03:26.081140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.081175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 
00:31:43.666 [2024-12-05 14:03:26.081355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.081398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 00:31:43.666 [2024-12-05 14:03:26.081642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.081674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 00:31:43.666 [2024-12-05 14:03:26.081911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.081943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 00:31:43.666 [2024-12-05 14:03:26.082206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.082238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 00:31:43.666 [2024-12-05 14:03:26.082485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.082519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 
00:31:43.666 [2024-12-05 14:03:26.082721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.082753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 00:31:43.666 [2024-12-05 14:03:26.082940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.082972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 00:31:43.666 [2024-12-05 14:03:26.083213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.083247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 00:31:43.666 [2024-12-05 14:03:26.083433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.083467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 00:31:43.666 [2024-12-05 14:03:26.083643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.083676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 
00:31:43.666 [2024-12-05 14:03:26.083874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.083908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 00:31:43.666 [2024-12-05 14:03:26.084143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.084175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 00:31:43.666 [2024-12-05 14:03:26.084302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.084334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 00:31:43.666 [2024-12-05 14:03:26.084522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.084556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 00:31:43.666 [2024-12-05 14:03:26.084691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.666 [2024-12-05 14:03:26.084723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.666 qpair failed and we were unable to recover it. 
00:31:43.666 [2024-12-05 14:03:26.084847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.667 [2024-12-05 14:03:26.084878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.667 qpair failed and we were unable to recover it. 00:31:43.667 [2024-12-05 14:03:26.085067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.667 [2024-12-05 14:03:26.085105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.667 qpair failed and we were unable to recover it. 00:31:43.667 [2024-12-05 14:03:26.085293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.667 [2024-12-05 14:03:26.085325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.667 qpair failed and we were unable to recover it. 00:31:43.667 [2024-12-05 14:03:26.085533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.667 [2024-12-05 14:03:26.085567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.667 qpair failed and we were unable to recover it. 00:31:43.667 [2024-12-05 14:03:26.085814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.667 [2024-12-05 14:03:26.085847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.667 qpair failed and we were unable to recover it. 
00:31:43.667 [2024-12-05 14:03:26.086142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.667 [2024-12-05 14:03:26.086174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.667 qpair failed and we were unable to recover it. 00:31:43.667 [2024-12-05 14:03:26.086446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.667 [2024-12-05 14:03:26.086480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.667 qpair failed and we were unable to recover it. 00:31:43.667 [2024-12-05 14:03:26.086756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.667 [2024-12-05 14:03:26.086790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.667 qpair failed and we were unable to recover it. 00:31:43.667 [2024-12-05 14:03:26.087073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.667 [2024-12-05 14:03:26.087104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.667 qpair failed and we were unable to recover it. 00:31:43.667 [2024-12-05 14:03:26.087245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.667 [2024-12-05 14:03:26.087277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.667 qpair failed and we were unable to recover it. 
00:31:43.667 [2024-12-05 14:03:26.087470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.667 [2024-12-05 14:03:26.087508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.667 qpair failed and we were unable to recover it. 00:31:43.667 [2024-12-05 14:03:26.087747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.667 [2024-12-05 14:03:26.087779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.667 qpair failed and we were unable to recover it. 00:31:43.667 [2024-12-05 14:03:26.087962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.667 [2024-12-05 14:03:26.087994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.667 qpair failed and we were unable to recover it. 00:31:43.667 [2024-12-05 14:03:26.088283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.667 [2024-12-05 14:03:26.088317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.667 qpair failed and we were unable to recover it. 00:31:43.667 [2024-12-05 14:03:26.088517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.667 [2024-12-05 14:03:26.088550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.667 qpair failed and we were unable to recover it. 
00:31:43.667 [2024-12-05 14:03:26.088765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.667 [2024-12-05 14:03:26.088797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.667 qpair failed and we were unable to recover it. 00:31:43.667 [2024-12-05 14:03:26.088971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.667 [2024-12-05 14:03:26.089003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.667 qpair failed and we were unable to recover it. 00:31:43.667 [2024-12-05 14:03:26.089188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.667 [2024-12-05 14:03:26.089220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.667 qpair failed and we were unable to recover it. 00:31:43.667 [2024-12-05 14:03:26.089487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.667 [2024-12-05 14:03:26.089520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.667 qpair failed and we were unable to recover it. 00:31:43.667 [2024-12-05 14:03:26.089764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.667 [2024-12-05 14:03:26.089796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.667 qpair failed and we were unable to recover it. 
00:31:43.667 [2024-12-05 14:03:26.090077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.667 [2024-12-05 14:03:26.090115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.667 qpair failed and we were unable to recover it. 00:31:43.667 [2024-12-05 14:03:26.090309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.667 [2024-12-05 14:03:26.090341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.667 qpair failed and we were unable to recover it. 00:31:43.667 [2024-12-05 14:03:26.090559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.667 [2024-12-05 14:03:26.090593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.667 qpair failed and we were unable to recover it. 00:31:43.667 [2024-12-05 14:03:26.090834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.667 [2024-12-05 14:03:26.090866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.667 qpair failed and we were unable to recover it. 00:31:43.667 [2024-12-05 14:03:26.091122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.667 [2024-12-05 14:03:26.091155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.667 qpair failed and we were unable to recover it. 
00:31:43.667 [2024-12-05 14:03:26.091350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.667 [2024-12-05 14:03:26.091394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.667 qpair failed and we were unable to recover it. 00:31:43.667 [2024-12-05 14:03:26.091643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.667 [2024-12-05 14:03:26.091675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.667 qpair failed and we were unable to recover it. 00:31:43.667 [2024-12-05 14:03:26.091965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.667 [2024-12-05 14:03:26.091997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.667 qpair failed and we were unable to recover it. 00:31:43.667 [2024-12-05 14:03:26.092268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.667 [2024-12-05 14:03:26.092300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.667 qpair failed and we were unable to recover it. 00:31:43.667 [2024-12-05 14:03:26.092505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.667 [2024-12-05 14:03:26.092539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.667 qpair failed and we were unable to recover it. 
00:31:43.667 [2024-12-05 14:03:26.092795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.667 [2024-12-05 14:03:26.092827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.667 qpair failed and we were unable to recover it.
00:31:43.667 [2024-12-05 14:03:26.093042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.667 [2024-12-05 14:03:26.093074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.667 qpair failed and we were unable to recover it.
00:31:43.667 [2024-12-05 14:03:26.093323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.667 [2024-12-05 14:03:26.093355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.667 qpair failed and we were unable to recover it.
00:31:43.667 [2024-12-05 14:03:26.093558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.667 [2024-12-05 14:03:26.093590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.667 qpair failed and we were unable to recover it.
00:31:43.667 [2024-12-05 14:03:26.093816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.667 [2024-12-05 14:03:26.093849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.667 qpair failed and we were unable to recover it.
00:31:43.667 [2024-12-05 14:03:26.094033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.667 [2024-12-05 14:03:26.094066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.667 qpair failed and we were unable to recover it.
00:31:43.667 [2024-12-05 14:03:26.094304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.667 [2024-12-05 14:03:26.094336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.667 qpair failed and we were unable to recover it.
00:31:43.667 [2024-12-05 14:03:26.094639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.094673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.094929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.094961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.095156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.095188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.095306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.095339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.095615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.095649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.095841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.095873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.096128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.096160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.096390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.096425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.096642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.096674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.096942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.096975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.097191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.097225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.097488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.097521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.097812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.097844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.098055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.098087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.098327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.098358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.098580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.098613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.098877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.098908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.099048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.099080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.099382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.099416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.099599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.099631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.099833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.099865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.100128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.100161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.100330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.100362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.100611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.100649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.100893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.100925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.101138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.101169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.101353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.101397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.101653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.101685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.101953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.101985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.102108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.102141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.102411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.102445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.102628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.102661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.102866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.102898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.103078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.668 [2024-12-05 14:03:26.103110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.668 qpair failed and we were unable to recover it.
00:31:43.668 [2024-12-05 14:03:26.103311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.103344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.103597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.103630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.103836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.103868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.104132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.104165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.104414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.104465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.104775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.104808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.105048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.105080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.105376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.105409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.105676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.105708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.105949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.105980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.106190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.106222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.106491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.106525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.106793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.106825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.107076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.107108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.107317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.107350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.107567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.107600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.107919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.108006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.108306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.108344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.108650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.108684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.108929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.108961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.109146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.109178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.109316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.109347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.109542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.109574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.109843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.109875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.110140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.110171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.110421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.110455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.110749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.110780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.110884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.110915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.111181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.111213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.111511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.111559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.111810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.111842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.112027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.112059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.112330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.669 [2024-12-05 14:03:26.112363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.669 qpair failed and we were unable to recover it.
00:31:43.669 [2024-12-05 14:03:26.112641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.112673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.112951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.112982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.113175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.113208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.113490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.113521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.113781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.113813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.114001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.114032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.114268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.114299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.114562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.114595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.114785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.114817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.115008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.115040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.115231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.115262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.115545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.115578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.115856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.115888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.116162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.116193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.116477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.116509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.116713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.116745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.117008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.117039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.117216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.117247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.117511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.117543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.117756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.117787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.118057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.118089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.118285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.118316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.118556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.118589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.118891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.118924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.119139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.119171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.119436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.119469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.119760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.119792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.120065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.120102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.120295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.120326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.120480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.120514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.120780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.670 [2024-12-05 14:03:26.120811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.670 qpair failed and we were unable to recover it.
00:31:43.670 [2024-12-05 14:03:26.121020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.671 [2024-12-05 14:03:26.121052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.671 qpair failed and we were unable to recover it.
00:31:43.671 [2024-12-05 14:03:26.121347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.671 [2024-12-05 14:03:26.121388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.671 qpair failed and we were unable to recover it.
00:31:43.671 [2024-12-05 14:03:26.121582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.671 [2024-12-05 14:03:26.121615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.671 qpair failed and we were unable to recover it.
00:31:43.671 [2024-12-05 14:03:26.121788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.671 [2024-12-05 14:03:26.121819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.671 qpair failed and we were unable to recover it.
00:31:43.671 [2024-12-05 14:03:26.122092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.671 [2024-12-05 14:03:26.122124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.671 qpair failed and we were unable to recover it.
00:31:43.671 [2024-12-05 14:03:26.122308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.671 [2024-12-05 14:03:26.122346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.671 qpair failed and we were unable to recover it.
00:31:43.671 [2024-12-05 14:03:26.122610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.671 [2024-12-05 14:03:26.122643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.671 qpair failed and we were unable to recover it.
00:31:43.671 [2024-12-05 14:03:26.122929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.671 [2024-12-05 14:03:26.122962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.671 qpair failed and we were unable to recover it.
00:31:43.671 [2024-12-05 14:03:26.123207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.671 [2024-12-05 14:03:26.123239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.671 qpair failed and we were unable to recover it.
00:31:43.671 [2024-12-05 14:03:26.123422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.671 [2024-12-05 14:03:26.123455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.671 qpair failed and we were unable to recover it.
00:31:43.671 [2024-12-05 14:03:26.123655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.671 [2024-12-05 14:03:26.123687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.671 qpair failed and we were unable to recover it. 00:31:43.671 [2024-12-05 14:03:26.123926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.671 [2024-12-05 14:03:26.123958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.671 qpair failed and we were unable to recover it. 00:31:43.671 [2024-12-05 14:03:26.124232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.671 [2024-12-05 14:03:26.124265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.671 qpair failed and we were unable to recover it. 00:31:43.671 [2024-12-05 14:03:26.124509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.671 [2024-12-05 14:03:26.124541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.671 qpair failed and we were unable to recover it. 00:31:43.671 [2024-12-05 14:03:26.124718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.671 [2024-12-05 14:03:26.124749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.671 qpair failed and we were unable to recover it. 
00:31:43.671 [2024-12-05 14:03:26.125039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.671 [2024-12-05 14:03:26.125072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.671 qpair failed and we were unable to recover it. 00:31:43.671 [2024-12-05 14:03:26.125347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.671 [2024-12-05 14:03:26.125389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.671 qpair failed and we were unable to recover it. 00:31:43.671 [2024-12-05 14:03:26.125592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.671 [2024-12-05 14:03:26.125624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.671 qpair failed and we were unable to recover it. 00:31:43.671 [2024-12-05 14:03:26.125868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.671 [2024-12-05 14:03:26.125901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.671 qpair failed and we were unable to recover it. 00:31:43.671 [2024-12-05 14:03:26.126178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.671 [2024-12-05 14:03:26.126211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.671 qpair failed and we were unable to recover it. 
00:31:43.671 [2024-12-05 14:03:26.126483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.671 [2024-12-05 14:03:26.126516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.671 qpair failed and we were unable to recover it. 00:31:43.671 [2024-12-05 14:03:26.126827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.671 [2024-12-05 14:03:26.126860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.671 qpair failed and we were unable to recover it. 00:31:43.671 [2024-12-05 14:03:26.127128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.671 [2024-12-05 14:03:26.127161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.671 qpair failed and we were unable to recover it. 00:31:43.671 [2024-12-05 14:03:26.127352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.671 [2024-12-05 14:03:26.127395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.671 qpair failed and we were unable to recover it. 00:31:43.671 [2024-12-05 14:03:26.127644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.671 [2024-12-05 14:03:26.127675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.671 qpair failed and we were unable to recover it. 
00:31:43.671 [2024-12-05 14:03:26.127877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.671 [2024-12-05 14:03:26.127910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.671 qpair failed and we were unable to recover it. 00:31:43.671 [2024-12-05 14:03:26.128153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.671 [2024-12-05 14:03:26.128185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.671 qpair failed and we were unable to recover it. 00:31:43.671 [2024-12-05 14:03:26.128450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.671 [2024-12-05 14:03:26.128483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.671 qpair failed and we were unable to recover it. 00:31:43.671 [2024-12-05 14:03:26.128778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.671 [2024-12-05 14:03:26.128809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.671 qpair failed and we were unable to recover it. 00:31:43.671 [2024-12-05 14:03:26.129071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.671 [2024-12-05 14:03:26.129104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.671 qpair failed and we were unable to recover it. 
00:31:43.671 [2024-12-05 14:03:26.129243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.129273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 00:31:43.672 [2024-12-05 14:03:26.129489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.129522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 00:31:43.672 [2024-12-05 14:03:26.129782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.129868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 00:31:43.672 [2024-12-05 14:03:26.130093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.130130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 00:31:43.672 [2024-12-05 14:03:26.130406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.130442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 
00:31:43.672 [2024-12-05 14:03:26.130636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.130669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 00:31:43.672 [2024-12-05 14:03:26.130854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.130886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 00:31:43.672 [2024-12-05 14:03:26.131020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.131052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 00:31:43.672 [2024-12-05 14:03:26.131266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.131297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 00:31:43.672 [2024-12-05 14:03:26.131589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.131622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 
00:31:43.672 [2024-12-05 14:03:26.131911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.131944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 00:31:43.672 [2024-12-05 14:03:26.132216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.132248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 00:31:43.672 [2024-12-05 14:03:26.132537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.132570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 00:31:43.672 [2024-12-05 14:03:26.132843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.132876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 00:31:43.672 [2024-12-05 14:03:26.133158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.133190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 
00:31:43.672 [2024-12-05 14:03:26.133446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.133487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 00:31:43.672 [2024-12-05 14:03:26.133803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.133834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 00:31:43.672 [2024-12-05 14:03:26.134027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.134058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 00:31:43.672 [2024-12-05 14:03:26.134301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.134333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 00:31:43.672 [2024-12-05 14:03:26.134622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.134655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 
00:31:43.672 [2024-12-05 14:03:26.134849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.134881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 00:31:43.672 [2024-12-05 14:03:26.135173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.135204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 00:31:43.672 [2024-12-05 14:03:26.135496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.135528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 00:31:43.672 [2024-12-05 14:03:26.135799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.135833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 00:31:43.672 [2024-12-05 14:03:26.136048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.136080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 
00:31:43.672 [2024-12-05 14:03:26.136358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.136398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 00:31:43.672 [2024-12-05 14:03:26.136669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.136701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 00:31:43.672 [2024-12-05 14:03:26.136919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.136951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 00:31:43.672 [2024-12-05 14:03:26.137198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.137229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 00:31:43.672 [2024-12-05 14:03:26.137421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.137455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 
00:31:43.672 [2024-12-05 14:03:26.137672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.137704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 00:31:43.672 [2024-12-05 14:03:26.137896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.672 [2024-12-05 14:03:26.137928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.672 qpair failed and we were unable to recover it. 00:31:43.672 [2024-12-05 14:03:26.138188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.673 [2024-12-05 14:03:26.138220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.673 qpair failed and we were unable to recover it. 00:31:43.673 [2024-12-05 14:03:26.138515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.673 [2024-12-05 14:03:26.138549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.673 qpair failed and we were unable to recover it. 00:31:43.673 [2024-12-05 14:03:26.138817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.673 [2024-12-05 14:03:26.138849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.673 qpair failed and we were unable to recover it. 
00:31:43.673 [2024-12-05 14:03:26.139061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.673 [2024-12-05 14:03:26.139093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.673 qpair failed and we were unable to recover it. 00:31:43.673 [2024-12-05 14:03:26.139234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.673 [2024-12-05 14:03:26.139266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.673 qpair failed and we were unable to recover it. 00:31:43.673 [2024-12-05 14:03:26.139555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.673 [2024-12-05 14:03:26.139587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.673 qpair failed and we were unable to recover it. 00:31:43.673 [2024-12-05 14:03:26.139882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.673 [2024-12-05 14:03:26.139913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.673 qpair failed and we were unable to recover it. 00:31:43.673 [2024-12-05 14:03:26.140099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.673 [2024-12-05 14:03:26.140131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.673 qpair failed and we were unable to recover it. 
00:31:43.673 [2024-12-05 14:03:26.140309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.673 [2024-12-05 14:03:26.140340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.673 qpair failed and we were unable to recover it. 00:31:43.673 [2024-12-05 14:03:26.140615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.673 [2024-12-05 14:03:26.140648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.673 qpair failed and we were unable to recover it. 00:31:43.673 [2024-12-05 14:03:26.140923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.673 [2024-12-05 14:03:26.140996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.673 qpair failed and we were unable to recover it. 00:31:43.673 [2024-12-05 14:03:26.141276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.673 [2024-12-05 14:03:26.141314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.673 qpair failed and we were unable to recover it. 00:31:43.673 [2024-12-05 14:03:26.141616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.673 [2024-12-05 14:03:26.141652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.673 qpair failed and we were unable to recover it. 
00:31:43.673 [2024-12-05 14:03:26.141922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.673 [2024-12-05 14:03:26.141954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.673 qpair failed and we were unable to recover it. 00:31:43.673 [2024-12-05 14:03:26.142166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.673 [2024-12-05 14:03:26.142199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.673 qpair failed and we were unable to recover it. 00:31:43.673 [2024-12-05 14:03:26.142468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.673 [2024-12-05 14:03:26.142502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.673 qpair failed and we were unable to recover it. 00:31:43.673 [2024-12-05 14:03:26.142792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.673 [2024-12-05 14:03:26.142825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.673 qpair failed and we were unable to recover it. 00:31:43.673 [2024-12-05 14:03:26.143021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.673 [2024-12-05 14:03:26.143054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.673 qpair failed and we were unable to recover it. 
00:31:43.673 [2024-12-05 14:03:26.143299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.673 [2024-12-05 14:03:26.143331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.673 qpair failed and we were unable to recover it. 00:31:43.673 [2024-12-05 14:03:26.143612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.673 [2024-12-05 14:03:26.143645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.673 qpair failed and we were unable to recover it. 00:31:43.673 [2024-12-05 14:03:26.143891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.673 [2024-12-05 14:03:26.143925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.673 qpair failed and we were unable to recover it. 00:31:43.673 [2024-12-05 14:03:26.144230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.673 [2024-12-05 14:03:26.144268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.673 qpair failed and we were unable to recover it. 00:31:43.673 [2024-12-05 14:03:26.144510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.673 [2024-12-05 14:03:26.144543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.673 qpair failed and we were unable to recover it. 
00:31:43.673 [2024-12-05 14:03:26.144877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.673 [2024-12-05 14:03:26.144909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.673 qpair failed and we were unable to recover it. 
00:31:43.673 [... the same three-entry connect() failure (posix.c:1054 errno = 111, nvme_tcp.c:2288 sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it") repeats continuously from 14:03:26.145135 through 14:03:26.176451 (wall clock 00:31:43.673-00:31:43.676) ...]
00:31:43.676 [2024-12-05 14:03:26.176713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.676 [2024-12-05 14:03:26.176746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.676 qpair failed and we were unable to recover it. 00:31:43.676 [2024-12-05 14:03:26.177038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.676 [2024-12-05 14:03:26.177071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.676 qpair failed and we were unable to recover it. 00:31:43.676 [2024-12-05 14:03:26.177304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.676 [2024-12-05 14:03:26.177337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.676 qpair failed and we were unable to recover it. 00:31:43.676 [2024-12-05 14:03:26.177584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.676 [2024-12-05 14:03:26.177618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.676 qpair failed and we were unable to recover it. 00:31:43.676 [2024-12-05 14:03:26.177868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.676 [2024-12-05 14:03:26.177901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.676 qpair failed and we were unable to recover it. 
00:31:43.676 [2024-12-05 14:03:26.178141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.178174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.677 [2024-12-05 14:03:26.178456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.178490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.677 [2024-12-05 14:03:26.178738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.178770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.677 [2024-12-05 14:03:26.179041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.179074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.677 [2024-12-05 14:03:26.179326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.179359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 
00:31:43.677 [2024-12-05 14:03:26.179670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.179704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.677 [2024-12-05 14:03:26.179924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.179956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.677 [2024-12-05 14:03:26.180232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.180270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.677 [2024-12-05 14:03:26.180552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.180586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.677 [2024-12-05 14:03:26.180861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.180895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 
00:31:43.677 [2024-12-05 14:03:26.181186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.181220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.677 [2024-12-05 14:03:26.181496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.181529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.677 [2024-12-05 14:03:26.181813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.181847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.677 [2024-12-05 14:03:26.182116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.182148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.677 [2024-12-05 14:03:26.182444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.182477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 
00:31:43.677 [2024-12-05 14:03:26.182772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.182806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.677 [2024-12-05 14:03:26.183076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.183108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.677 [2024-12-05 14:03:26.183298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.183331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.677 [2024-12-05 14:03:26.183616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.183650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.677 [2024-12-05 14:03:26.183843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.183876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 
00:31:43.677 [2024-12-05 14:03:26.184054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.184085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.677 [2024-12-05 14:03:26.184359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.184405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.677 [2024-12-05 14:03:26.184691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.184723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.677 [2024-12-05 14:03:26.184937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.184970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.677 [2024-12-05 14:03:26.185154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.185187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 
00:31:43.677 [2024-12-05 14:03:26.185394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.185428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.677 [2024-12-05 14:03:26.185591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.185625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.677 [2024-12-05 14:03:26.185932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.185972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.677 [2024-12-05 14:03:26.186206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.186245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.677 [2024-12-05 14:03:26.186547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.186581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 
00:31:43.677 [2024-12-05 14:03:26.186866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.186899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.677 [2024-12-05 14:03:26.187156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.187188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.677 [2024-12-05 14:03:26.187335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.187392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.677 [2024-12-05 14:03:26.187530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.187563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.677 [2024-12-05 14:03:26.187718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.187751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 
00:31:43.677 [2024-12-05 14:03:26.187976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.188008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.677 [2024-12-05 14:03:26.188211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.188244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.677 [2024-12-05 14:03:26.188385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.677 [2024-12-05 14:03:26.188419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.677 qpair failed and we were unable to recover it. 00:31:43.678 [2024-12-05 14:03:26.188552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.188585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 00:31:43.678 [2024-12-05 14:03:26.188838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.188874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 
00:31:43.678 [2024-12-05 14:03:26.189091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.189124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 00:31:43.678 [2024-12-05 14:03:26.189384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.189419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 00:31:43.678 [2024-12-05 14:03:26.189703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.189736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 00:31:43.678 [2024-12-05 14:03:26.189964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.189998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 00:31:43.678 [2024-12-05 14:03:26.190206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.190239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 
00:31:43.678 [2024-12-05 14:03:26.190449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.190483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 00:31:43.678 [2024-12-05 14:03:26.190682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.190715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 00:31:43.678 [2024-12-05 14:03:26.190967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.191004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 00:31:43.678 [2024-12-05 14:03:26.191217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.191251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 00:31:43.678 [2024-12-05 14:03:26.191441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.191475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 
00:31:43.678 [2024-12-05 14:03:26.191755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.191791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 00:31:43.678 [2024-12-05 14:03:26.191950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.191982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 00:31:43.678 [2024-12-05 14:03:26.192202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.192235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 00:31:43.678 [2024-12-05 14:03:26.192438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.192472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 00:31:43.678 [2024-12-05 14:03:26.192679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.192712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 
00:31:43.678 [2024-12-05 14:03:26.192962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.192995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 00:31:43.678 [2024-12-05 14:03:26.193261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.193294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 00:31:43.678 [2024-12-05 14:03:26.193417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.193451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 00:31:43.678 [2024-12-05 14:03:26.193586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.193619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 00:31:43.678 [2024-12-05 14:03:26.193895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.193928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 
00:31:43.678 [2024-12-05 14:03:26.194209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.194244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 00:31:43.678 [2024-12-05 14:03:26.194508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.194542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 00:31:43.678 [2024-12-05 14:03:26.194780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.194813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 00:31:43.678 [2024-12-05 14:03:26.195108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.195139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 00:31:43.678 [2024-12-05 14:03:26.195335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.195380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 
00:31:43.678 [2024-12-05 14:03:26.195576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.195609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 00:31:43.678 [2024-12-05 14:03:26.195864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.195897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 00:31:43.678 [2024-12-05 14:03:26.196036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.196075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 00:31:43.678 [2024-12-05 14:03:26.196352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.196397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 00:31:43.678 [2024-12-05 14:03:26.196681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.196715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 
00:31:43.678 [2024-12-05 14:03:26.196909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.196941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 00:31:43.678 [2024-12-05 14:03:26.197190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.197223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 00:31:43.678 [2024-12-05 14:03:26.197453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.197486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 00:31:43.678 [2024-12-05 14:03:26.197684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.197718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 00:31:43.678 [2024-12-05 14:03:26.197898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.197931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 
00:31:43.678 [2024-12-05 14:03:26.198228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.678 [2024-12-05 14:03:26.198264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.678 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.198484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.198518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.198668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.198701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.198961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.198994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.199245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.199278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 
00:31:43.679 [2024-12-05 14:03:26.199474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.199507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.199789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.199822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.200013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.200045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.200156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.200188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.200492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.200530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 
00:31:43.679 [2024-12-05 14:03:26.200676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.200710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.200936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.200968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.201165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.201199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.201404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.201439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.201719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.201752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 
00:31:43.679 [2024-12-05 14:03:26.202011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.202045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.202244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.202275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.202466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.202500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.202658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.202690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.202906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.202939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 
00:31:43.679 [2024-12-05 14:03:26.203143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.203175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.203458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.203492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.203639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.203672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.203924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.203956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.204163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.204195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 
00:31:43.679 [2024-12-05 14:03:26.204491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.204525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.204795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.204828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.205054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.205086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.205219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.205251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.205508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.205541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 
00:31:43.679 [2024-12-05 14:03:26.205843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.205877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.206184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.206217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.206497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.206538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.206761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.206795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.206998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.207029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 
00:31:43.679 [2024-12-05 14:03:26.207351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.207395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.207631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.207665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.207934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.679 [2024-12-05 14:03:26.207968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.679 qpair failed and we were unable to recover it. 00:31:43.679 [2024-12-05 14:03:26.208220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.208253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 00:31:43.680 [2024-12-05 14:03:26.208380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.208414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 
00:31:43.680 [2024-12-05 14:03:26.208630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.208663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 00:31:43.680 [2024-12-05 14:03:26.208864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.208895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 00:31:43.680 [2024-12-05 14:03:26.209125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.209159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 00:31:43.680 [2024-12-05 14:03:26.209360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.209428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 00:31:43.680 [2024-12-05 14:03:26.209567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.209600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 
00:31:43.680 [2024-12-05 14:03:26.209807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.209840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 00:31:43.680 [2024-12-05 14:03:26.210065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.210099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 00:31:43.680 [2024-12-05 14:03:26.210280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.210313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 00:31:43.680 [2024-12-05 14:03:26.210608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.210643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 00:31:43.680 [2024-12-05 14:03:26.210858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.210891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 
00:31:43.680 [2024-12-05 14:03:26.211203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.211236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 00:31:43.680 [2024-12-05 14:03:26.211428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.211462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 00:31:43.680 [2024-12-05 14:03:26.212553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.212604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 00:31:43.680 [2024-12-05 14:03:26.212840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.212879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 00:31:43.680 [2024-12-05 14:03:26.213100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.213135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 
00:31:43.680 [2024-12-05 14:03:26.213347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.213396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 00:31:43.680 [2024-12-05 14:03:26.213544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.213577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 00:31:43.680 [2024-12-05 14:03:26.213846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.213881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 00:31:43.680 [2024-12-05 14:03:26.214043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.214078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 00:31:43.680 [2024-12-05 14:03:26.214340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.214388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 
00:31:43.680 [2024-12-05 14:03:26.214545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.214578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 00:31:43.680 [2024-12-05 14:03:26.214831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.214865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 00:31:43.680 [2024-12-05 14:03:26.215049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.215085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 00:31:43.680 [2024-12-05 14:03:26.215288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.215320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 00:31:43.680 [2024-12-05 14:03:26.215599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.215633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 
00:31:43.680 [2024-12-05 14:03:26.215835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.215871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 00:31:43.680 [2024-12-05 14:03:26.216142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.216175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 00:31:43.680 [2024-12-05 14:03:26.216483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.216521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 00:31:43.680 [2024-12-05 14:03:26.216723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.216758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 00:31:43.680 [2024-12-05 14:03:26.216982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.217015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 
00:31:43.680 [2024-12-05 14:03:26.217321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.217357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 00:31:43.680 [2024-12-05 14:03:26.217510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.680 [2024-12-05 14:03:26.217544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.680 qpair failed and we were unable to recover it. 00:31:43.680 [2024-12-05 14:03:26.217775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.681 [2024-12-05 14:03:26.217815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.681 qpair failed and we were unable to recover it. 00:31:43.681 [2024-12-05 14:03:26.217962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.681 [2024-12-05 14:03:26.217996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.681 qpair failed and we were unable to recover it. 00:31:43.681 [2024-12-05 14:03:26.218196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.681 [2024-12-05 14:03:26.218229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.681 qpair failed and we were unable to recover it. 
00:31:43.681 [2024-12-05 14:03:26.218431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.681 [2024-12-05 14:03:26.218466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.681 qpair failed and we were unable to recover it. 00:31:43.681 [2024-12-05 14:03:26.218689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.681 [2024-12-05 14:03:26.218723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.681 qpair failed and we were unable to recover it. 00:31:43.681 [2024-12-05 14:03:26.218854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.681 [2024-12-05 14:03:26.218888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.681 qpair failed and we were unable to recover it. 00:31:43.681 [2024-12-05 14:03:26.219220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.681 [2024-12-05 14:03:26.219253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.681 qpair failed and we were unable to recover it. 00:31:43.681 [2024-12-05 14:03:26.219449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.681 [2024-12-05 14:03:26.219483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.681 qpair failed and we were unable to recover it. 
00:31:43.681 [2024-12-05 14:03:26.219624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.681 [2024-12-05 14:03:26.219658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.681 qpair failed and we were unable to recover it. 00:31:43.681 [2024-12-05 14:03:26.219891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.681 [2024-12-05 14:03:26.219924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.681 qpair failed and we were unable to recover it. 00:31:43.681 [2024-12-05 14:03:26.220143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.681 [2024-12-05 14:03:26.220176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.681 qpair failed and we were unable to recover it. 00:31:43.681 [2024-12-05 14:03:26.220386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.681 [2024-12-05 14:03:26.220423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.681 qpair failed and we were unable to recover it. 00:31:43.681 [2024-12-05 14:03:26.220639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.681 [2024-12-05 14:03:26.220673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.681 qpair failed and we were unable to recover it. 
00:31:43.681 [2024-12-05 14:03:26.220887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.681 [2024-12-05 14:03:26.220919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.681 qpair failed and we were unable to recover it. 00:31:43.681 [2024-12-05 14:03:26.221146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.681 [2024-12-05 14:03:26.221182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.681 qpair failed and we were unable to recover it. 00:31:43.681 [2024-12-05 14:03:26.221403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.681 [2024-12-05 14:03:26.221439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.681 qpair failed and we were unable to recover it. 00:31:43.681 [2024-12-05 14:03:26.221722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.681 [2024-12-05 14:03:26.221755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.681 qpair failed and we were unable to recover it. 00:31:43.681 [2024-12-05 14:03:26.221910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.681 [2024-12-05 14:03:26.221945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.681 qpair failed and we were unable to recover it. 
00:31:43.681 [2024-12-05 14:03:26.222091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.681 [2024-12-05 14:03:26.222124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.681 qpair failed and we were unable to recover it. 00:31:43.681 [2024-12-05 14:03:26.222389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.681 [2024-12-05 14:03:26.222426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.681 qpair failed and we were unable to recover it. 00:31:43.681 [2024-12-05 14:03:26.222573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.681 [2024-12-05 14:03:26.222607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.681 qpair failed and we were unable to recover it. 00:31:43.681 [2024-12-05 14:03:26.222752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.681 [2024-12-05 14:03:26.222786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.681 qpair failed and we were unable to recover it. 00:31:43.681 [2024-12-05 14:03:26.223027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.681 [2024-12-05 14:03:26.223059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:43.681 qpair failed and we were unable to recover it. 
00:31:43.682 [2024-12-05 14:03:26.234314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.682 [2024-12-05 14:03:26.234398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:43.682 qpair failed and we were unable to recover it.
00:31:43.970 [2024-12-05 14:03:26.251258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.970 [2024-12-05 14:03:26.251290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.251502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.251536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.251811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.251844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.252121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.252154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.252404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.252437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 
00:31:43.971 [2024-12-05 14:03:26.252652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.252685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.252888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.252920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.253140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.253174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.253300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.253332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.253546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.253581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 
00:31:43.971 [2024-12-05 14:03:26.253789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.253822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.254046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.254079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.254289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.254323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.254587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.254620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.254834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.254867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 
00:31:43.971 [2024-12-05 14:03:26.255131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.255164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.255375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.255409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.255667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.255700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.255892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.255924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.256230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.256262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 
00:31:43.971 [2024-12-05 14:03:26.256554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.256588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.256739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.256772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.256911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.256943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.257253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.257291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.257520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.257554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 
00:31:43.971 [2024-12-05 14:03:26.257833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.257865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.258146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.258179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.258389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.258429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.258633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.258666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.258875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.258908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 
00:31:43.971 [2024-12-05 14:03:26.259199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.259231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.259443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.259477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.259636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.259670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.259814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.259846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.260054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.260086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 
00:31:43.971 [2024-12-05 14:03:26.260220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.260252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.260392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.260425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.260611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.971 [2024-12-05 14:03:26.260643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.971 qpair failed and we were unable to recover it. 00:31:43.971 [2024-12-05 14:03:26.260828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.260861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 00:31:43.972 [2024-12-05 14:03:26.261159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.261192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 
00:31:43.972 [2024-12-05 14:03:26.261466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.261499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 00:31:43.972 [2024-12-05 14:03:26.261790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.261823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 00:31:43.972 [2024-12-05 14:03:26.262129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.262160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 00:31:43.972 [2024-12-05 14:03:26.262301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.262333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 00:31:43.972 [2024-12-05 14:03:26.262500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.262533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 
00:31:43.972 [2024-12-05 14:03:26.262806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.262838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 00:31:43.972 [2024-12-05 14:03:26.263121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.263153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 00:31:43.972 [2024-12-05 14:03:26.263506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.263539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 00:31:43.972 [2024-12-05 14:03:26.263737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.263770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 00:31:43.972 [2024-12-05 14:03:26.263996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.264027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 
00:31:43.972 [2024-12-05 14:03:26.264218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.264251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 00:31:43.972 [2024-12-05 14:03:26.264531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.264564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 00:31:43.972 [2024-12-05 14:03:26.264703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.264735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 00:31:43.972 [2024-12-05 14:03:26.264941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.264973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 00:31:43.972 [2024-12-05 14:03:26.265257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.265290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 
00:31:43.972 [2024-12-05 14:03:26.265519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.265553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 00:31:43.972 [2024-12-05 14:03:26.265699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.265731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 00:31:43.972 [2024-12-05 14:03:26.265924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.265956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 00:31:43.972 [2024-12-05 14:03:26.266142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.266174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 00:31:43.972 [2024-12-05 14:03:26.266385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.266419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 
00:31:43.972 [2024-12-05 14:03:26.266630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.266662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 00:31:43.972 [2024-12-05 14:03:26.266916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.266948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 00:31:43.972 [2024-12-05 14:03:26.267220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.267253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 00:31:43.972 [2024-12-05 14:03:26.267454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.267487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 00:31:43.972 [2024-12-05 14:03:26.267693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.267726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 
00:31:43.972 [2024-12-05 14:03:26.267922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.267955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 00:31:43.972 [2024-12-05 14:03:26.268228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.268260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 00:31:43.972 [2024-12-05 14:03:26.268538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.268584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 00:31:43.972 [2024-12-05 14:03:26.268840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.268872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 00:31:43.972 [2024-12-05 14:03:26.269180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.269212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 
00:31:43.972 [2024-12-05 14:03:26.269420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.269453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 00:31:43.972 [2024-12-05 14:03:26.269715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.269748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 00:31:43.972 [2024-12-05 14:03:26.269898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.269930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 00:31:43.972 [2024-12-05 14:03:26.270204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.270236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 00:31:43.972 [2024-12-05 14:03:26.270446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.270480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 
00:31:43.972 [2024-12-05 14:03:26.270771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.972 [2024-12-05 14:03:26.270809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.972 qpair failed and we were unable to recover it. 00:31:43.972 [2024-12-05 14:03:26.271040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.973 [2024-12-05 14:03:26.271072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.973 qpair failed and we were unable to recover it. 00:31:43.973 [2024-12-05 14:03:26.271392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.973 [2024-12-05 14:03:26.271426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.973 qpair failed and we were unable to recover it. 00:31:43.973 [2024-12-05 14:03:26.271587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.973 [2024-12-05 14:03:26.271619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.973 qpair failed and we were unable to recover it. 00:31:43.973 [2024-12-05 14:03:26.271766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.973 [2024-12-05 14:03:26.271799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.973 qpair failed and we were unable to recover it. 
00:31:43.976 [2024-12-05 14:03:26.298928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.976 [2024-12-05 14:03:26.298962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.976 qpair failed and we were unable to recover it. 00:31:43.976 [2024-12-05 14:03:26.299191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.976 [2024-12-05 14:03:26.299224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.976 qpair failed and we were unable to recover it. 00:31:43.976 [2024-12-05 14:03:26.299417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.976 [2024-12-05 14:03:26.299452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.976 qpair failed and we were unable to recover it. 00:31:43.976 [2024-12-05 14:03:26.299704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.976 [2024-12-05 14:03:26.299737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.976 qpair failed and we were unable to recover it. 00:31:43.976 [2024-12-05 14:03:26.300031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.976 [2024-12-05 14:03:26.300065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.976 qpair failed and we were unable to recover it. 
00:31:43.976 [2024-12-05 14:03:26.300273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.976 [2024-12-05 14:03:26.300305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.976 qpair failed and we were unable to recover it. 00:31:43.976 [2024-12-05 14:03:26.300519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.976 [2024-12-05 14:03:26.300554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.976 qpair failed and we were unable to recover it. 00:31:43.976 [2024-12-05 14:03:26.300827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.976 [2024-12-05 14:03:26.300861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.976 qpair failed and we were unable to recover it. 00:31:43.976 [2024-12-05 14:03:26.301007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.976 [2024-12-05 14:03:26.301041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.976 qpair failed and we were unable to recover it. 00:31:43.976 [2024-12-05 14:03:26.301195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.976 [2024-12-05 14:03:26.301226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.976 qpair failed and we were unable to recover it. 
00:31:43.976 [2024-12-05 14:03:26.301431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.976 [2024-12-05 14:03:26.301466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.976 qpair failed and we were unable to recover it. 00:31:43.976 [2024-12-05 14:03:26.301632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.976 [2024-12-05 14:03:26.301665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.976 qpair failed and we were unable to recover it. 00:31:43.976 [2024-12-05 14:03:26.301924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.976 [2024-12-05 14:03:26.301956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.976 qpair failed and we were unable to recover it. 00:31:43.976 [2024-12-05 14:03:26.302158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.976 [2024-12-05 14:03:26.302190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.976 qpair failed and we were unable to recover it. 00:31:43.976 [2024-12-05 14:03:26.302393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.976 [2024-12-05 14:03:26.302429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.976 qpair failed and we were unable to recover it. 
00:31:43.976 [2024-12-05 14:03:26.302639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.976 [2024-12-05 14:03:26.302671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.976 qpair failed and we were unable to recover it. 00:31:43.976 [2024-12-05 14:03:26.302860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.976 [2024-12-05 14:03:26.302893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.976 qpair failed and we were unable to recover it. 00:31:43.976 [2024-12-05 14:03:26.303106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.976 [2024-12-05 14:03:26.303139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.976 qpair failed and we were unable to recover it. 00:31:43.976 [2024-12-05 14:03:26.303413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.976 [2024-12-05 14:03:26.303453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.976 qpair failed and we were unable to recover it. 00:31:43.976 [2024-12-05 14:03:26.303725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.976 [2024-12-05 14:03:26.303761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.976 qpair failed and we were unable to recover it. 
00:31:43.976 [2024-12-05 14:03:26.304066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.976 [2024-12-05 14:03:26.304100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.976 qpair failed and we were unable to recover it. 00:31:43.976 [2024-12-05 14:03:26.304236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.976 [2024-12-05 14:03:26.304268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.976 qpair failed and we were unable to recover it. 00:31:43.976 [2024-12-05 14:03:26.304497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.976 [2024-12-05 14:03:26.304530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.976 qpair failed and we were unable to recover it. 00:31:43.976 [2024-12-05 14:03:26.304804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.304838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 00:31:43.977 [2024-12-05 14:03:26.305128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.305166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 
00:31:43.977 [2024-12-05 14:03:26.305381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.305418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 00:31:43.977 [2024-12-05 14:03:26.305623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.305657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 00:31:43.977 [2024-12-05 14:03:26.305860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.305892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 00:31:43.977 [2024-12-05 14:03:26.306080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.306115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 00:31:43.977 [2024-12-05 14:03:26.306311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.306342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 
00:31:43.977 [2024-12-05 14:03:26.306562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.306595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 00:31:43.977 [2024-12-05 14:03:26.306732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.306764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 00:31:43.977 [2024-12-05 14:03:26.306896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.306929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 00:31:43.977 [2024-12-05 14:03:26.307202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.307234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 00:31:43.977 [2024-12-05 14:03:26.307520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.307554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 
00:31:43.977 [2024-12-05 14:03:26.307741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.307775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 00:31:43.977 [2024-12-05 14:03:26.308074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.308107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 00:31:43.977 [2024-12-05 14:03:26.308320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.308354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 00:31:43.977 [2024-12-05 14:03:26.308593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.308629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 00:31:43.977 [2024-12-05 14:03:26.308833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.308866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 
00:31:43.977 [2024-12-05 14:03:26.309073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.309108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 00:31:43.977 [2024-12-05 14:03:26.309359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.309399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 00:31:43.977 [2024-12-05 14:03:26.309609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.309640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 00:31:43.977 [2024-12-05 14:03:26.309846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.309879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 00:31:43.977 [2024-12-05 14:03:26.310028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.310061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 
00:31:43.977 [2024-12-05 14:03:26.310278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.310310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 00:31:43.977 [2024-12-05 14:03:26.310549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.310582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 00:31:43.977 [2024-12-05 14:03:26.310718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.310751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 00:31:43.977 [2024-12-05 14:03:26.310966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.310998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 00:31:43.977 [2024-12-05 14:03:26.311166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.311200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 
00:31:43.977 [2024-12-05 14:03:26.311411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.311445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 00:31:43.977 [2024-12-05 14:03:26.311652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.311684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 00:31:43.977 [2024-12-05 14:03:26.311895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.311929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 00:31:43.977 [2024-12-05 14:03:26.312129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.312162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 00:31:43.977 [2024-12-05 14:03:26.312482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.312517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 
00:31:43.977 [2024-12-05 14:03:26.312653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.312686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 00:31:43.977 [2024-12-05 14:03:26.312847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.312881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 00:31:43.977 [2024-12-05 14:03:26.313118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.313150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 00:31:43.977 [2024-12-05 14:03:26.313338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.313379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 00:31:43.977 [2024-12-05 14:03:26.313532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.313565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 
00:31:43.977 [2024-12-05 14:03:26.313762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.977 [2024-12-05 14:03:26.313794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.977 qpair failed and we were unable to recover it. 00:31:43.977 [2024-12-05 14:03:26.313958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.978 [2024-12-05 14:03:26.313990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.978 qpair failed and we were unable to recover it. 00:31:43.978 [2024-12-05 14:03:26.314226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.978 [2024-12-05 14:03:26.314259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.978 qpair failed and we were unable to recover it. 00:31:43.978 [2024-12-05 14:03:26.314518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.978 [2024-12-05 14:03:26.314552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.978 qpair failed and we were unable to recover it. 00:31:43.978 [2024-12-05 14:03:26.314713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.978 [2024-12-05 14:03:26.314753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.978 qpair failed and we were unable to recover it. 
00:31:43.978 [2024-12-05 14:03:26.314910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.978 [2024-12-05 14:03:26.314945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.978 qpair failed and we were unable to recover it. 00:31:43.978 [2024-12-05 14:03:26.315144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.978 [2024-12-05 14:03:26.315178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.978 qpair failed and we were unable to recover it. 00:31:43.978 [2024-12-05 14:03:26.315506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.978 [2024-12-05 14:03:26.315542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.978 qpair failed and we were unable to recover it. 00:31:43.978 [2024-12-05 14:03:26.315743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.978 [2024-12-05 14:03:26.315776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.978 qpair failed and we were unable to recover it. 00:31:43.978 [2024-12-05 14:03:26.316001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.978 [2024-12-05 14:03:26.316033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.978 qpair failed and we were unable to recover it. 
00:31:43.978 [2024-12-05 14:03:26.316259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.978 [2024-12-05 14:03:26.316293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.978 qpair failed and we were unable to recover it. 00:31:43.978 [2024-12-05 14:03:26.316515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.978 [2024-12-05 14:03:26.316548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.978 qpair failed and we were unable to recover it. 00:31:43.978 [2024-12-05 14:03:26.316730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.978 [2024-12-05 14:03:26.316764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.978 qpair failed and we were unable to recover it. 00:31:43.978 [2024-12-05 14:03:26.316912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.978 [2024-12-05 14:03:26.316947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.978 qpair failed and we were unable to recover it. 00:31:43.978 [2024-12-05 14:03:26.317134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.978 [2024-12-05 14:03:26.317165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.978 qpair failed and we were unable to recover it. 
00:31:43.978 [2024-12-05 14:03:26.317431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.978 [2024-12-05 14:03:26.317467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.978 qpair failed and we were unable to recover it. 00:31:43.978 [2024-12-05 14:03:26.317653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.978 [2024-12-05 14:03:26.317685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.978 qpair failed and we were unable to recover it. 00:31:43.978 [2024-12-05 14:03:26.317912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.978 [2024-12-05 14:03:26.317946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.978 qpair failed and we were unable to recover it. 00:31:43.978 [2024-12-05 14:03:26.318171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.978 [2024-12-05 14:03:26.318203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.978 qpair failed and we were unable to recover it. 00:31:43.978 [2024-12-05 14:03:26.318340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.978 [2024-12-05 14:03:26.318379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:43.978 qpair failed and we were unable to recover it. 
00:31:43.980 [2024-12-05 14:03:26.339903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.981 [2024-12-05 14:03:26.339964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.981 qpair failed and we were unable to recover it. 
00:31:43.981 [2024-12-05 14:03:26.344868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.981 [2024-12-05 14:03:26.344890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.981 qpair failed and we were unable to recover it. 00:31:43.981 [2024-12-05 14:03:26.345099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.981 [2024-12-05 14:03:26.345122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.981 qpair failed and we were unable to recover it. 00:31:43.981 [2024-12-05 14:03:26.345315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.981 [2024-12-05 14:03:26.345360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.981 qpair failed and we were unable to recover it. 00:31:43.981 [2024-12-05 14:03:26.345534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.981 [2024-12-05 14:03:26.345570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.981 qpair failed and we were unable to recover it. 00:31:43.981 [2024-12-05 14:03:26.345821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.981 [2024-12-05 14:03:26.345853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.981 qpair failed and we were unable to recover it. 
00:31:43.981 [2024-12-05 14:03:26.346026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.981 [2024-12-05 14:03:26.346061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.981 qpair failed and we were unable to recover it. 00:31:43.981 [2024-12-05 14:03:26.346364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.981 [2024-12-05 14:03:26.346409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.981 qpair failed and we were unable to recover it. 00:31:43.981 [2024-12-05 14:03:26.346642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.981 [2024-12-05 14:03:26.346675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.981 qpair failed and we were unable to recover it. 00:31:43.981 [2024-12-05 14:03:26.346872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.981 [2024-12-05 14:03:26.346901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.981 qpair failed and we were unable to recover it. 00:31:43.981 [2024-12-05 14:03:26.347096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.981 [2024-12-05 14:03:26.347119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.981 qpair failed and we were unable to recover it. 
00:31:43.981 [2024-12-05 14:03:26.347384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.981 [2024-12-05 14:03:26.347409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.981 qpair failed and we were unable to recover it. 00:31:43.981 [2024-12-05 14:03:26.347583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.981 [2024-12-05 14:03:26.347606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.981 qpair failed and we were unable to recover it. 00:31:43.981 [2024-12-05 14:03:26.347805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.981 [2024-12-05 14:03:26.347839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.981 qpair failed and we were unable to recover it. 00:31:43.981 [2024-12-05 14:03:26.348048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.981 [2024-12-05 14:03:26.348081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.981 qpair failed and we were unable to recover it. 00:31:43.981 [2024-12-05 14:03:26.348304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.981 [2024-12-05 14:03:26.348340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.981 qpair failed and we were unable to recover it. 
00:31:43.981 [2024-12-05 14:03:26.348561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.981 [2024-12-05 14:03:26.348584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.981 qpair failed and we were unable to recover it. 00:31:43.981 [2024-12-05 14:03:26.348819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.348843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.349148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.349170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.349359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.349393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.349584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.349606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 
00:31:43.982 [2024-12-05 14:03:26.349891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.349925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.350133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.350166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.350431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.350466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.350671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.350695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.350870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.350892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 
00:31:43.982 [2024-12-05 14:03:26.351191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.351215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.351526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.351549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.351740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.351765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.351957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.351979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.352237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.352260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 
00:31:43.982 [2024-12-05 14:03:26.352436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.352461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.352709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.352743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.352869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.352901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.353104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.353137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.353332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.353383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 
00:31:43.982 [2024-12-05 14:03:26.353584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.353625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.353819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.353843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.353946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.353973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.354242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.354265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.354475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.354499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 
00:31:43.982 [2024-12-05 14:03:26.354691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.354717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.354913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.354935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.355108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.355130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.355256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.355278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.355487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.355512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 
00:31:43.982 [2024-12-05 14:03:26.355632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.355655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.355860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.355883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.356140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.356164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.356384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.356408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.356700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.356773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 
00:31:43.982 [2024-12-05 14:03:26.357073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.357111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.357315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.357349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.357625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.357663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.357924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.357958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 00:31:43.982 [2024-12-05 14:03:26.358167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.982 [2024-12-05 14:03:26.358200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.982 qpair failed and we were unable to recover it. 
00:31:43.983 [2024-12-05 14:03:26.358395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.983 [2024-12-05 14:03:26.358430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.983 qpair failed and we were unable to recover it. 00:31:43.983 [2024-12-05 14:03:26.358740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.983 [2024-12-05 14:03:26.358776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.983 qpair failed and we were unable to recover it. 00:31:43.983 [2024-12-05 14:03:26.358911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.983 [2024-12-05 14:03:26.358943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.983 qpair failed and we were unable to recover it. 00:31:43.983 [2024-12-05 14:03:26.359079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.983 [2024-12-05 14:03:26.359114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.983 qpair failed and we were unable to recover it. 00:31:43.983 [2024-12-05 14:03:26.359306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.983 [2024-12-05 14:03:26.359339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.983 qpair failed and we were unable to recover it. 
00:31:43.983 [2024-12-05 14:03:26.359601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.983 [2024-12-05 14:03:26.359636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.983 qpair failed and we were unable to recover it. 00:31:43.983 [2024-12-05 14:03:26.359783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.983 [2024-12-05 14:03:26.359815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.983 qpair failed and we were unable to recover it. 00:31:43.983 [2024-12-05 14:03:26.360031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.983 [2024-12-05 14:03:26.360075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.983 qpair failed and we were unable to recover it. 00:31:43.983 [2024-12-05 14:03:26.360267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.983 [2024-12-05 14:03:26.360300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.983 qpair failed and we were unable to recover it. 00:31:43.983 [2024-12-05 14:03:26.360571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.983 [2024-12-05 14:03:26.360607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.983 qpair failed and we were unable to recover it. 
00:31:43.983 [2024-12-05 14:03:26.360816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.983 [2024-12-05 14:03:26.360850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.983 qpair failed and we were unable to recover it. 00:31:43.983 [2024-12-05 14:03:26.361060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.983 [2024-12-05 14:03:26.361094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.983 qpair failed and we were unable to recover it. 00:31:43.983 [2024-12-05 14:03:26.361286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.983 [2024-12-05 14:03:26.361317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.983 qpair failed and we were unable to recover it. 00:31:43.983 [2024-12-05 14:03:26.361476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.983 [2024-12-05 14:03:26.361510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.983 qpair failed and we were unable to recover it. 00:31:43.983 [2024-12-05 14:03:26.361714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.983 [2024-12-05 14:03:26.361747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.983 qpair failed and we were unable to recover it. 
00:31:43.983 [2024-12-05 14:03:26.361885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.983 [2024-12-05 14:03:26.361917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.983 qpair failed and we were unable to recover it. 00:31:43.983 [2024-12-05 14:03:26.362197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.983 [2024-12-05 14:03:26.362230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.983 qpair failed and we were unable to recover it. 00:31:43.983 [2024-12-05 14:03:26.362446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.983 [2024-12-05 14:03:26.362483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.983 qpair failed and we were unable to recover it. 00:31:43.983 [2024-12-05 14:03:26.362640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.983 [2024-12-05 14:03:26.362672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.983 qpair failed and we were unable to recover it. 00:31:43.983 [2024-12-05 14:03:26.362820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.983 [2024-12-05 14:03:26.362852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:43.983 qpair failed and we were unable to recover it. 
00:31:43.983 [2024-12-05 14:03:26.363143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.983 [2024-12-05 14:03:26.363175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:43.983 qpair failed and we were unable to recover it.
00:31:43.983 [repeated output elided: the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair, always with errno = 111 against addr=10.0.0.2, port=4420, recurs continuously from 14:03:26.363 through 14:03:26.387, first for tqpair=0x7fdb60000b90 and from 14:03:26.366618 onward for tqpair=0xbe5be0; every attempt ends with "qpair failed and we were unable to recover it."]
00:31:43.986 [2024-12-05 14:03:26.387923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.986 [2024-12-05 14:03:26.387944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.986 qpair failed and we were unable to recover it. 00:31:43.986 [2024-12-05 14:03:26.388115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.986 [2024-12-05 14:03:26.388139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.986 qpair failed and we were unable to recover it. 00:31:43.986 [2024-12-05 14:03:26.388386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.986 [2024-12-05 14:03:26.388408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.986 qpair failed and we were unable to recover it. 00:31:43.986 [2024-12-05 14:03:26.388619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.986 [2024-12-05 14:03:26.388643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.986 qpair failed and we were unable to recover it. 00:31:43.986 [2024-12-05 14:03:26.388828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.986 [2024-12-05 14:03:26.388852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.986 qpair failed and we were unable to recover it. 
00:31:43.986 [2024-12-05 14:03:26.389132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.986 [2024-12-05 14:03:26.389154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.986 qpair failed and we were unable to recover it. 00:31:43.986 [2024-12-05 14:03:26.389324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.986 [2024-12-05 14:03:26.389347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.986 qpair failed and we were unable to recover it. 00:31:43.986 [2024-12-05 14:03:26.389498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.986 [2024-12-05 14:03:26.389522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.986 qpair failed and we were unable to recover it. 00:31:43.986 [2024-12-05 14:03:26.389660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.986 [2024-12-05 14:03:26.389686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.986 qpair failed and we were unable to recover it. 00:31:43.986 [2024-12-05 14:03:26.389859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.986 [2024-12-05 14:03:26.389882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.986 qpair failed and we were unable to recover it. 
00:31:43.986 [2024-12-05 14:03:26.390023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.986 [2024-12-05 14:03:26.390046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.986 qpair failed and we were unable to recover it. 00:31:43.986 [2024-12-05 14:03:26.390283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.986 [2024-12-05 14:03:26.390306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.986 qpair failed and we were unable to recover it. 00:31:43.986 [2024-12-05 14:03:26.390450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.986 [2024-12-05 14:03:26.390476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.986 qpair failed and we were unable to recover it. 00:31:43.986 [2024-12-05 14:03:26.390590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.986 [2024-12-05 14:03:26.390610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.986 qpair failed and we were unable to recover it. 00:31:43.986 [2024-12-05 14:03:26.390738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.986 [2024-12-05 14:03:26.390761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.986 qpair failed and we were unable to recover it. 
00:31:43.986 [2024-12-05 14:03:26.390860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.390882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 00:31:43.987 [2024-12-05 14:03:26.391077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.391098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 00:31:43.987 [2024-12-05 14:03:26.391266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.391289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 00:31:43.987 [2024-12-05 14:03:26.391448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.391471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 00:31:43.987 [2024-12-05 14:03:26.391597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.391626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 
00:31:43.987 [2024-12-05 14:03:26.391805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.391827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 00:31:43.987 [2024-12-05 14:03:26.392051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.392073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 00:31:43.987 [2024-12-05 14:03:26.392274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.392297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 00:31:43.987 [2024-12-05 14:03:26.392545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.392570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 00:31:43.987 [2024-12-05 14:03:26.392703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.392726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 
00:31:43.987 [2024-12-05 14:03:26.392852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.392875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 00:31:43.987 [2024-12-05 14:03:26.393118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.393142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 00:31:43.987 [2024-12-05 14:03:26.393302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.393325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 00:31:43.987 [2024-12-05 14:03:26.393540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.393564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 00:31:43.987 [2024-12-05 14:03:26.393690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.393713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 
00:31:43.987 [2024-12-05 14:03:26.393826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.393848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 00:31:43.987 [2024-12-05 14:03:26.394142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.394165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 00:31:43.987 [2024-12-05 14:03:26.394308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.394332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 00:31:43.987 [2024-12-05 14:03:26.394514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.394541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 00:31:43.987 [2024-12-05 14:03:26.394634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.394658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 
00:31:43.987 [2024-12-05 14:03:26.394839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.394866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 00:31:43.987 [2024-12-05 14:03:26.394990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.395014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 00:31:43.987 [2024-12-05 14:03:26.395237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.395258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 00:31:43.987 [2024-12-05 14:03:26.395470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.395495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 00:31:43.987 [2024-12-05 14:03:26.395676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.395699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 
00:31:43.987 [2024-12-05 14:03:26.395891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.395917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 00:31:43.987 [2024-12-05 14:03:26.396100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.396124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 00:31:43.987 [2024-12-05 14:03:26.396378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.396403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 00:31:43.987 [2024-12-05 14:03:26.396520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.396544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 00:31:43.987 [2024-12-05 14:03:26.396645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.396668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 
00:31:43.987 [2024-12-05 14:03:26.396857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.396883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 00:31:43.987 [2024-12-05 14:03:26.397171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.397200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 00:31:43.987 [2024-12-05 14:03:26.397366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.397399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 00:31:43.987 [2024-12-05 14:03:26.397537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.397562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 00:31:43.987 [2024-12-05 14:03:26.397754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.397779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.987 qpair failed and we were unable to recover it. 
00:31:43.987 [2024-12-05 14:03:26.398034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.987 [2024-12-05 14:03:26.398058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.988 qpair failed and we were unable to recover it. 00:31:43.988 [2024-12-05 14:03:26.398223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.988 [2024-12-05 14:03:26.398247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.988 qpair failed and we were unable to recover it. 00:31:43.988 [2024-12-05 14:03:26.398359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.988 [2024-12-05 14:03:26.398393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.988 qpair failed and we were unable to recover it. 00:31:43.988 [2024-12-05 14:03:26.398624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.988 [2024-12-05 14:03:26.398649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.988 qpair failed and we were unable to recover it. 00:31:43.988 [2024-12-05 14:03:26.398845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.988 [2024-12-05 14:03:26.398869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.988 qpair failed and we were unable to recover it. 
00:31:43.988 [2024-12-05 14:03:26.399096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.988 [2024-12-05 14:03:26.399120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.988 qpair failed and we were unable to recover it. 00:31:43.988 [2024-12-05 14:03:26.399288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.988 [2024-12-05 14:03:26.399333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.988 qpair failed and we were unable to recover it. 00:31:43.988 [2024-12-05 14:03:26.399631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.988 [2024-12-05 14:03:26.399666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.988 qpair failed and we were unable to recover it. 00:31:43.988 [2024-12-05 14:03:26.399876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.988 [2024-12-05 14:03:26.399908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.988 qpair failed and we were unable to recover it. 00:31:43.988 [2024-12-05 14:03:26.400071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.988 [2024-12-05 14:03:26.400109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.988 qpair failed and we were unable to recover it. 
00:31:43.988 [2024-12-05 14:03:26.400307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.988 [2024-12-05 14:03:26.400342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.988 qpair failed and we were unable to recover it. 00:31:43.988 [2024-12-05 14:03:26.400571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.988 [2024-12-05 14:03:26.400595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.988 qpair failed and we were unable to recover it. 00:31:43.988 [2024-12-05 14:03:26.400785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.988 [2024-12-05 14:03:26.400816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.988 qpair failed and we were unable to recover it. 00:31:43.988 [2024-12-05 14:03:26.400990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.988 [2024-12-05 14:03:26.401013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.988 qpair failed and we were unable to recover it. 00:31:43.988 [2024-12-05 14:03:26.401265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.988 [2024-12-05 14:03:26.401288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.988 qpair failed and we were unable to recover it. 
00:31:43.988 [2024-12-05 14:03:26.401561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.988 [2024-12-05 14:03:26.401590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.988 qpair failed and we were unable to recover it. 00:31:43.988 [2024-12-05 14:03:26.401873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.988 [2024-12-05 14:03:26.401907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.988 qpair failed and we were unable to recover it. 00:31:43.988 [2024-12-05 14:03:26.402159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.988 [2024-12-05 14:03:26.402195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.988 qpair failed and we were unable to recover it. 00:31:43.988 [2024-12-05 14:03:26.402422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.988 [2024-12-05 14:03:26.402456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.988 qpair failed and we were unable to recover it. 00:31:43.988 [2024-12-05 14:03:26.402602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.988 [2024-12-05 14:03:26.402635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.988 qpair failed and we were unable to recover it. 
00:31:43.988 [2024-12-05 14:03:26.402836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.988 [2024-12-05 14:03:26.402869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.988 qpair failed and we were unable to recover it. 00:31:43.988 [2024-12-05 14:03:26.403062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.988 [2024-12-05 14:03:26.403087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.988 qpair failed and we were unable to recover it. 00:31:43.988 [2024-12-05 14:03:26.403346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.988 [2024-12-05 14:03:26.403375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.988 qpair failed and we were unable to recover it. 00:31:43.988 [2024-12-05 14:03:26.403513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.988 [2024-12-05 14:03:26.403537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.988 qpair failed and we were unable to recover it. 00:31:43.988 [2024-12-05 14:03:26.403674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.988 [2024-12-05 14:03:26.403701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.988 qpair failed and we were unable to recover it. 
00:31:43.988 [2024-12-05 14:03:26.403880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.988 [2024-12-05 14:03:26.403905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.988 qpair failed and we were unable to recover it. 00:31:43.988 [2024-12-05 14:03:26.404111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.988 [2024-12-05 14:03:26.404134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.988 qpair failed and we were unable to recover it. 00:31:43.988 [2024-12-05 14:03:26.404237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.988 [2024-12-05 14:03:26.404263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.988 qpair failed and we were unable to recover it. 00:31:43.988 [2024-12-05 14:03:26.404435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.988 [2024-12-05 14:03:26.404459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.988 qpair failed and we were unable to recover it. 00:31:43.988 [2024-12-05 14:03:26.404584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.988 [2024-12-05 14:03:26.404607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.988 qpair failed and we were unable to recover it. 
00:31:43.991 [2024-12-05 14:03:26.425969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.991 [2024-12-05 14:03:26.425992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.991 qpair failed and we were unable to recover it. 00:31:43.991 [2024-12-05 14:03:26.426178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.991 [2024-12-05 14:03:26.426200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.991 qpair failed and we were unable to recover it. 00:31:43.991 [2024-12-05 14:03:26.426456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.991 [2024-12-05 14:03:26.426481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.991 qpair failed and we were unable to recover it. 00:31:43.991 [2024-12-05 14:03:26.426659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.991 [2024-12-05 14:03:26.426682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.991 qpair failed and we were unable to recover it. 00:31:43.991 [2024-12-05 14:03:26.426812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.991 [2024-12-05 14:03:26.426838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.991 qpair failed and we were unable to recover it. 
00:31:43.991 [2024-12-05 14:03:26.427053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.991 [2024-12-05 14:03:26.427077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.991 qpair failed and we were unable to recover it. 00:31:43.991 [2024-12-05 14:03:26.427183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.991 [2024-12-05 14:03:26.427205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.991 qpair failed and we were unable to recover it. 00:31:43.991 [2024-12-05 14:03:26.427448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.991 [2024-12-05 14:03:26.427472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.991 qpair failed and we were unable to recover it. 00:31:43.991 [2024-12-05 14:03:26.427662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.991 [2024-12-05 14:03:26.427690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.991 qpair failed and we were unable to recover it. 00:31:43.991 [2024-12-05 14:03:26.427785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.991 [2024-12-05 14:03:26.427811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.991 qpair failed and we were unable to recover it. 
00:31:43.991 [2024-12-05 14:03:26.428102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.991 [2024-12-05 14:03:26.428126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.991 qpair failed and we were unable to recover it. 00:31:43.991 [2024-12-05 14:03:26.428316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.991 [2024-12-05 14:03:26.428339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.991 qpair failed and we were unable to recover it. 00:31:43.991 [2024-12-05 14:03:26.428487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.991 [2024-12-05 14:03:26.428514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.991 qpair failed and we were unable to recover it. 00:31:43.991 [2024-12-05 14:03:26.428636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.991 [2024-12-05 14:03:26.428659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.991 qpair failed and we were unable to recover it. 00:31:43.991 [2024-12-05 14:03:26.428791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.428816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 
00:31:43.992 [2024-12-05 14:03:26.428945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.428970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 00:31:43.992 [2024-12-05 14:03:26.429211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.429233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 00:31:43.992 [2024-12-05 14:03:26.429403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.429427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 00:31:43.992 [2024-12-05 14:03:26.429530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.429554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 00:31:43.992 [2024-12-05 14:03:26.429685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.429708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 
00:31:43.992 [2024-12-05 14:03:26.429879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.429902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 00:31:43.992 [2024-12-05 14:03:26.430200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.430232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 00:31:43.992 [2024-12-05 14:03:26.430430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.430465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 00:31:43.992 [2024-12-05 14:03:26.430615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.430648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 00:31:43.992 [2024-12-05 14:03:26.430785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.430819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 
00:31:43.992 [2024-12-05 14:03:26.431076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.431111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 00:31:43.992 [2024-12-05 14:03:26.431243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.431267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 00:31:43.992 [2024-12-05 14:03:26.431520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.431545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 00:31:43.992 [2024-12-05 14:03:26.431671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.431697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 00:31:43.992 [2024-12-05 14:03:26.431881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.431903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 
00:31:43.992 [2024-12-05 14:03:26.432162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.432185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 00:31:43.992 [2024-12-05 14:03:26.432404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.432429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 00:31:43.992 [2024-12-05 14:03:26.432605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.432628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 00:31:43.992 [2024-12-05 14:03:26.432731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.432753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 00:31:43.992 [2024-12-05 14:03:26.432854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.432875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 
00:31:43.992 [2024-12-05 14:03:26.433006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.433036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 00:31:43.992 [2024-12-05 14:03:26.433245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.433268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 00:31:43.992 [2024-12-05 14:03:26.433507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.433531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 00:31:43.992 [2024-12-05 14:03:26.433651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.433672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 00:31:43.992 [2024-12-05 14:03:26.433798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.433821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 
00:31:43.992 [2024-12-05 14:03:26.433952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.433979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 00:31:43.992 [2024-12-05 14:03:26.434149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.434172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 00:31:43.992 [2024-12-05 14:03:26.434278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.434301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 00:31:43.992 [2024-12-05 14:03:26.434496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.434522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 00:31:43.992 [2024-12-05 14:03:26.434621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.434644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 
00:31:43.992 [2024-12-05 14:03:26.434749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.434773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 00:31:43.992 [2024-12-05 14:03:26.434935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.434960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 00:31:43.992 [2024-12-05 14:03:26.435101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.435126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 00:31:43.992 [2024-12-05 14:03:26.435235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.992 [2024-12-05 14:03:26.435260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.992 qpair failed and we were unable to recover it. 00:31:43.993 [2024-12-05 14:03:26.435363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.993 [2024-12-05 14:03:26.435401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.993 qpair failed and we were unable to recover it. 
00:31:43.993 [2024-12-05 14:03:26.435493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.993 [2024-12-05 14:03:26.435517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.993 qpair failed and we were unable to recover it. 00:31:43.993 [2024-12-05 14:03:26.435637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.993 [2024-12-05 14:03:26.435661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.993 qpair failed and we were unable to recover it. 00:31:43.993 [2024-12-05 14:03:26.435803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.993 [2024-12-05 14:03:26.435825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.993 qpair failed and we were unable to recover it. 00:31:43.993 [2024-12-05 14:03:26.435919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.993 [2024-12-05 14:03:26.435941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.993 qpair failed and we were unable to recover it. 00:31:43.993 [2024-12-05 14:03:26.436048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.993 [2024-12-05 14:03:26.436070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.993 qpair failed and we were unable to recover it. 
00:31:43.993 [2024-12-05 14:03:26.436185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.993 [2024-12-05 14:03:26.436207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.993 qpair failed and we were unable to recover it. 00:31:43.993 [2024-12-05 14:03:26.436314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.993 [2024-12-05 14:03:26.436338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.993 qpair failed and we were unable to recover it. 00:31:43.993 [2024-12-05 14:03:26.436437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.993 [2024-12-05 14:03:26.436459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.993 qpair failed and we were unable to recover it. 00:31:43.993 [2024-12-05 14:03:26.436557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.993 [2024-12-05 14:03:26.436579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.993 qpair failed and we were unable to recover it. 00:31:43.993 [2024-12-05 14:03:26.436675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.993 [2024-12-05 14:03:26.436698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.993 qpair failed and we were unable to recover it. 
00:31:43.993 [2024-12-05 14:03:26.436801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.993 [2024-12-05 14:03:26.436822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.993 qpair failed and we were unable to recover it. 00:31:43.993 [2024-12-05 14:03:26.436922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.993 [2024-12-05 14:03:26.436943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.993 qpair failed and we were unable to recover it. 00:31:43.993 [2024-12-05 14:03:26.437031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.993 [2024-12-05 14:03:26.437055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.993 qpair failed and we were unable to recover it. 00:31:43.993 [2024-12-05 14:03:26.437162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.993 [2024-12-05 14:03:26.437187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.993 qpair failed and we were unable to recover it. 00:31:43.993 [2024-12-05 14:03:26.437285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.993 [2024-12-05 14:03:26.437309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.993 qpair failed and we were unable to recover it. 
00:31:43.993 [2024-12-05 14:03:26.437431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.993 [2024-12-05 14:03:26.437453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.993 qpair failed and we were unable to recover it. 00:31:43.993 [2024-12-05 14:03:26.437630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.993 [2024-12-05 14:03:26.437653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.993 qpair failed and we were unable to recover it. 00:31:43.993 [2024-12-05 14:03:26.437757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.993 [2024-12-05 14:03:26.437778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.993 qpair failed and we were unable to recover it. 00:31:43.993 [2024-12-05 14:03:26.437870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.993 [2024-12-05 14:03:26.437891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.993 qpair failed and we were unable to recover it. 00:31:43.993 [2024-12-05 14:03:26.438016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.993 [2024-12-05 14:03:26.438040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.993 qpair failed and we were unable to recover it. 
00:31:43.993 [2024-12-05 14:03:26.438154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.993 [2024-12-05 14:03:26.438180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.993 qpair failed and we were unable to recover it. 00:31:43.993 [2024-12-05 14:03:26.438282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.993 [2024-12-05 14:03:26.438303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.993 qpair failed and we were unable to recover it. 00:31:43.993 [2024-12-05 14:03:26.438470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.993 [2024-12-05 14:03:26.438495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.993 qpair failed and we were unable to recover it. 00:31:43.993 [2024-12-05 14:03:26.438601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.993 [2024-12-05 14:03:26.438623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.993 qpair failed and we were unable to recover it. 00:31:43.993 [2024-12-05 14:03:26.438788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.993 [2024-12-05 14:03:26.438812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.993 qpair failed and we were unable to recover it. 
00:31:43.993 [2024-12-05 14:03:26.438913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:43.993 [2024-12-05 14:03:26.438934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:43.993 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 14:03:26.438913 through 14:03:26.456614 ...]
00:31:43.996 [2024-12-05 14:03:26.456709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.996 [2024-12-05 14:03:26.456729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.996 qpair failed and we were unable to recover it. 00:31:43.996 [2024-12-05 14:03:26.456891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.996 [2024-12-05 14:03:26.456915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.996 qpair failed and we were unable to recover it. 00:31:43.996 [2024-12-05 14:03:26.457075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.996 [2024-12-05 14:03:26.457098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.996 qpair failed and we were unable to recover it. 00:31:43.996 [2024-12-05 14:03:26.457193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.996 [2024-12-05 14:03:26.457213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.996 qpair failed and we were unable to recover it. 00:31:43.996 [2024-12-05 14:03:26.457309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.996 [2024-12-05 14:03:26.457329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.996 qpair failed and we were unable to recover it. 
00:31:43.996 [2024-12-05 14:03:26.457438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.996 [2024-12-05 14:03:26.457460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.996 qpair failed and we were unable to recover it. 00:31:43.996 [2024-12-05 14:03:26.457536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.996 [2024-12-05 14:03:26.457556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.996 qpair failed and we were unable to recover it. 00:31:43.996 [2024-12-05 14:03:26.457655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.996 [2024-12-05 14:03:26.457674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.996 qpair failed and we were unable to recover it. 00:31:43.996 [2024-12-05 14:03:26.457759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.996 [2024-12-05 14:03:26.457779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.996 qpair failed and we were unable to recover it. 00:31:43.996 [2024-12-05 14:03:26.457941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.996 [2024-12-05 14:03:26.457965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.996 qpair failed and we were unable to recover it. 
00:31:43.996 [2024-12-05 14:03:26.458141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.996 [2024-12-05 14:03:26.458162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.996 qpair failed and we were unable to recover it. 00:31:43.996 [2024-12-05 14:03:26.458270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.996 [2024-12-05 14:03:26.458291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.996 qpair failed and we were unable to recover it. 00:31:43.996 [2024-12-05 14:03:26.458365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.996 [2024-12-05 14:03:26.458391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.996 qpair failed and we were unable to recover it. 00:31:43.997 [2024-12-05 14:03:26.458600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.458619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 00:31:43.997 [2024-12-05 14:03:26.458705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.458725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 
00:31:43.997 [2024-12-05 14:03:26.458899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.458920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 00:31:43.997 [2024-12-05 14:03:26.459075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.459095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 00:31:43.997 [2024-12-05 14:03:26.459321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.459344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 00:31:43.997 [2024-12-05 14:03:26.459461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.459481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 00:31:43.997 [2024-12-05 14:03:26.459587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.459610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 
00:31:43.997 [2024-12-05 14:03:26.459708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.459728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 00:31:43.997 [2024-12-05 14:03:26.459921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.459940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 00:31:43.997 [2024-12-05 14:03:26.460044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.460064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 00:31:43.997 [2024-12-05 14:03:26.460159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.460184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 00:31:43.997 [2024-12-05 14:03:26.460278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.460296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 
00:31:43.997 [2024-12-05 14:03:26.460456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.460476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 00:31:43.997 [2024-12-05 14:03:26.460573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.460593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 00:31:43.997 [2024-12-05 14:03:26.460683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.460701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 00:31:43.997 [2024-12-05 14:03:26.460786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.460806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 00:31:43.997 [2024-12-05 14:03:26.460890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.460909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 
00:31:43.997 [2024-12-05 14:03:26.461079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.461099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 00:31:43.997 [2024-12-05 14:03:26.461267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.461287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 00:31:43.997 [2024-12-05 14:03:26.461408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.461427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 00:31:43.997 [2024-12-05 14:03:26.461537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.461556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 00:31:43.997 [2024-12-05 14:03:26.461645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.461664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 
00:31:43.997 [2024-12-05 14:03:26.461818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.461839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 00:31:43.997 [2024-12-05 14:03:26.461922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.461944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 00:31:43.997 [2024-12-05 14:03:26.462044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.462064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 00:31:43.997 [2024-12-05 14:03:26.462255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.462275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 00:31:43.997 [2024-12-05 14:03:26.462388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.462408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 
00:31:43.997 [2024-12-05 14:03:26.462496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.462515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 00:31:43.997 [2024-12-05 14:03:26.462668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.462689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 00:31:43.997 [2024-12-05 14:03:26.462794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.462814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 00:31:43.997 [2024-12-05 14:03:26.462995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.463016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 00:31:43.997 [2024-12-05 14:03:26.463103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.463121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 
00:31:43.997 [2024-12-05 14:03:26.463245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.463267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 00:31:43.997 [2024-12-05 14:03:26.463378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.463398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.997 qpair failed and we were unable to recover it. 00:31:43.997 [2024-12-05 14:03:26.463489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.997 [2024-12-05 14:03:26.463508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 00:31:43.998 [2024-12-05 14:03:26.463598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.463618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 00:31:43.998 [2024-12-05 14:03:26.463786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.463806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 
00:31:43.998 [2024-12-05 14:03:26.463912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.463933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 00:31:43.998 [2024-12-05 14:03:26.464024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.464043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 00:31:43.998 [2024-12-05 14:03:26.464200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.464228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 00:31:43.998 [2024-12-05 14:03:26.464334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.464352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 00:31:43.998 [2024-12-05 14:03:26.464467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.464487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 
00:31:43.998 [2024-12-05 14:03:26.464592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.464610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 00:31:43.998 [2024-12-05 14:03:26.464692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.464713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 00:31:43.998 [2024-12-05 14:03:26.464868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.464889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 00:31:43.998 [2024-12-05 14:03:26.464986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.465004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 00:31:43.998 [2024-12-05 14:03:26.465084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.465102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 
00:31:43.998 [2024-12-05 14:03:26.465189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.465208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 00:31:43.998 [2024-12-05 14:03:26.465375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.465394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 00:31:43.998 [2024-12-05 14:03:26.465484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.465505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 00:31:43.998 [2024-12-05 14:03:26.465616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.465640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 00:31:43.998 [2024-12-05 14:03:26.465742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.465761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 
00:31:43.998 [2024-12-05 14:03:26.465883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.465902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 00:31:43.998 [2024-12-05 14:03:26.465985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.466004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 00:31:43.998 [2024-12-05 14:03:26.466090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.466111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 00:31:43.998 [2024-12-05 14:03:26.466195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.466214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 00:31:43.998 [2024-12-05 14:03:26.466298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.466318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 
00:31:43.998 [2024-12-05 14:03:26.466416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.466437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 00:31:43.998 [2024-12-05 14:03:26.466541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.466560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 00:31:43.998 [2024-12-05 14:03:26.466638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.466656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 00:31:43.998 [2024-12-05 14:03:26.466739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.466759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 00:31:43.998 [2024-12-05 14:03:26.466910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.466938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 
00:31:43.998 [2024-12-05 14:03:26.467020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.467038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 00:31:43.998 [2024-12-05 14:03:26.467147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.467165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 00:31:43.998 [2024-12-05 14:03:26.467266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.467286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 00:31:43.998 [2024-12-05 14:03:26.467385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.467406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 00:31:43.998 [2024-12-05 14:03:26.467514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.998 [2024-12-05 14:03:26.467534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:43.998 qpair failed and we were unable to recover it. 
00:31:44.001 [2024-12-05 14:03:26.487322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.001 [2024-12-05 14:03:26.487345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.001 qpair failed and we were unable to recover it. 00:31:44.001 [2024-12-05 14:03:26.487492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.001 [2024-12-05 14:03:26.487516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.001 qpair failed and we were unable to recover it. 00:31:44.001 [2024-12-05 14:03:26.487730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.001 [2024-12-05 14:03:26.487752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.001 qpair failed and we were unable to recover it. 00:31:44.001 [2024-12-05 14:03:26.487938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.001 [2024-12-05 14:03:26.487963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.001 qpair failed and we were unable to recover it. 00:31:44.001 [2024-12-05 14:03:26.488197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.001 [2024-12-05 14:03:26.488220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.001 qpair failed and we were unable to recover it. 
00:31:44.001 [2024-12-05 14:03:26.488386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.001 [2024-12-05 14:03:26.488410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.001 qpair failed and we were unable to recover it. 00:31:44.001 [2024-12-05 14:03:26.488625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.001 [2024-12-05 14:03:26.488650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.001 qpair failed and we were unable to recover it. 00:31:44.001 [2024-12-05 14:03:26.488805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.488828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 00:31:44.002 [2024-12-05 14:03:26.489036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.489058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 00:31:44.002 [2024-12-05 14:03:26.489162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.489184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 
00:31:44.002 [2024-12-05 14:03:26.489373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.489404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 00:31:44.002 [2024-12-05 14:03:26.489505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.489526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 00:31:44.002 [2024-12-05 14:03:26.489642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.489662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 00:31:44.002 [2024-12-05 14:03:26.489849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.489874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 00:31:44.002 [2024-12-05 14:03:26.490085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.490142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 
00:31:44.002 [2024-12-05 14:03:26.490242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.490265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 00:31:44.002 [2024-12-05 14:03:26.490419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.490443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 00:31:44.002 [2024-12-05 14:03:26.490671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.490695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 00:31:44.002 [2024-12-05 14:03:26.490923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.490946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 00:31:44.002 [2024-12-05 14:03:26.491128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.491150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 
00:31:44.002 [2024-12-05 14:03:26.491342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.491366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 00:31:44.002 [2024-12-05 14:03:26.491558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.491581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 00:31:44.002 [2024-12-05 14:03:26.491689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.491709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 00:31:44.002 [2024-12-05 14:03:26.491822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.491845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 00:31:44.002 [2024-12-05 14:03:26.491955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.491978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 
00:31:44.002 [2024-12-05 14:03:26.492090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.492114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 00:31:44.002 [2024-12-05 14:03:26.492286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.492309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 00:31:44.002 [2024-12-05 14:03:26.492488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.492512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 00:31:44.002 [2024-12-05 14:03:26.492627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.492650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 00:31:44.002 [2024-12-05 14:03:26.492745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.492767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 
00:31:44.002 [2024-12-05 14:03:26.492879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.492902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 00:31:44.002 [2024-12-05 14:03:26.493010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.493032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 00:31:44.002 [2024-12-05 14:03:26.493156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.493180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 00:31:44.002 [2024-12-05 14:03:26.493351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.493380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 00:31:44.002 [2024-12-05 14:03:26.493506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.493528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 
00:31:44.002 [2024-12-05 14:03:26.493631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.493653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 00:31:44.002 [2024-12-05 14:03:26.493821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.493843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 00:31:44.002 [2024-12-05 14:03:26.493952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.493975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 00:31:44.002 [2024-12-05 14:03:26.494189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.494212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 00:31:44.002 [2024-12-05 14:03:26.494406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.494429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 
00:31:44.002 [2024-12-05 14:03:26.494604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.494626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 00:31:44.002 [2024-12-05 14:03:26.494752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.494775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 00:31:44.002 [2024-12-05 14:03:26.494942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.494965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 00:31:44.002 [2024-12-05 14:03:26.495166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.495189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 00:31:44.002 [2024-12-05 14:03:26.495382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.495406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.002 qpair failed and we were unable to recover it. 
00:31:44.002 [2024-12-05 14:03:26.496577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.002 [2024-12-05 14:03:26.496623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 00:31:44.003 [2024-12-05 14:03:26.496853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.496877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 00:31:44.003 [2024-12-05 14:03:26.497093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.497115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 00:31:44.003 [2024-12-05 14:03:26.497295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.497320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 00:31:44.003 [2024-12-05 14:03:26.497502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.497526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 
00:31:44.003 [2024-12-05 14:03:26.497724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.497755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 00:31:44.003 [2024-12-05 14:03:26.498012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.498036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 00:31:44.003 [2024-12-05 14:03:26.498139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.498160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 00:31:44.003 [2024-12-05 14:03:26.498339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.498362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 00:31:44.003 [2024-12-05 14:03:26.498613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.498636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 
00:31:44.003 [2024-12-05 14:03:26.498949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.498972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 00:31:44.003 [2024-12-05 14:03:26.499230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.499253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 00:31:44.003 [2024-12-05 14:03:26.499579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.499614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 00:31:44.003 [2024-12-05 14:03:26.499807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.499839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 00:31:44.003 [2024-12-05 14:03:26.499987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.500020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 
00:31:44.003 [2024-12-05 14:03:26.500215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.500240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 00:31:44.003 [2024-12-05 14:03:26.500357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.500388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 00:31:44.003 [2024-12-05 14:03:26.500519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.500542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 00:31:44.003 [2024-12-05 14:03:26.500680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.500703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 00:31:44.003 [2024-12-05 14:03:26.500817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.500841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 
00:31:44.003 [2024-12-05 14:03:26.501009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.501033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 00:31:44.003 [2024-12-05 14:03:26.501159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.501184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 00:31:44.003 [2024-12-05 14:03:26.501379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.501402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 00:31:44.003 [2024-12-05 14:03:26.501585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.501609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 00:31:44.003 [2024-12-05 14:03:26.501839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.501864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 
00:31:44.003 [2024-12-05 14:03:26.502141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.502174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 00:31:44.003 [2024-12-05 14:03:26.502354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.502413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 00:31:44.003 [2024-12-05 14:03:26.502561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.502594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 00:31:44.003 [2024-12-05 14:03:26.502807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.502840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 00:31:44.003 [2024-12-05 14:03:26.502994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.503026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 
00:31:44.003 [2024-12-05 14:03:26.503300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.503333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 00:31:44.003 [2024-12-05 14:03:26.503547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.503580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 00:31:44.003 [2024-12-05 14:03:26.503784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.503809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 00:31:44.003 [2024-12-05 14:03:26.504002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.504027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 00:31:44.003 [2024-12-05 14:03:26.504135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.003 [2024-12-05 14:03:26.504158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.003 qpair failed and we were unable to recover it. 
00:31:44.006 [2024-12-05 14:03:26.527538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.006 [2024-12-05 14:03:26.527560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.006 qpair failed and we were unable to recover it. 00:31:44.006 [2024-12-05 14:03:26.527794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.006 [2024-12-05 14:03:26.527818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.006 qpair failed and we were unable to recover it. 00:31:44.295 [2024-12-05 14:03:26.527932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.295 [2024-12-05 14:03:26.527958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.295 qpair failed and we were unable to recover it. 00:31:44.295 [2024-12-05 14:03:26.528244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.295 [2024-12-05 14:03:26.528266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.295 qpair failed and we were unable to recover it. 00:31:44.295 [2024-12-05 14:03:26.528362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.295 [2024-12-05 14:03:26.528394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.295 qpair failed and we were unable to recover it. 
00:31:44.295 [2024-12-05 14:03:26.528584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.295 [2024-12-05 14:03:26.528609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.295 qpair failed and we were unable to recover it. 00:31:44.295 [2024-12-05 14:03:26.528721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.295 [2024-12-05 14:03:26.528744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.295 qpair failed and we were unable to recover it. 00:31:44.295 [2024-12-05 14:03:26.528862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.295 [2024-12-05 14:03:26.528884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.295 qpair failed and we were unable to recover it. 00:31:44.295 [2024-12-05 14:03:26.529190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.295 [2024-12-05 14:03:26.529213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.295 qpair failed and we were unable to recover it. 00:31:44.296 [2024-12-05 14:03:26.529416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.529443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 
00:31:44.296 [2024-12-05 14:03:26.529559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.529583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 00:31:44.296 [2024-12-05 14:03:26.529704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.529729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 00:31:44.296 [2024-12-05 14:03:26.529894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.529919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 00:31:44.296 [2024-12-05 14:03:26.530077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.530099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 00:31:44.296 [2024-12-05 14:03:26.530328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.530350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 
00:31:44.296 [2024-12-05 14:03:26.530642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.530665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 00:31:44.296 [2024-12-05 14:03:26.530917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.530942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 00:31:44.296 [2024-12-05 14:03:26.531096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.531120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 00:31:44.296 [2024-12-05 14:03:26.531331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.531355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 00:31:44.296 [2024-12-05 14:03:26.531544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.531567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 
00:31:44.296 [2024-12-05 14:03:26.531699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.531721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 00:31:44.296 [2024-12-05 14:03:26.531854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.531876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 00:31:44.296 [2024-12-05 14:03:26.531990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.532014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 00:31:44.296 [2024-12-05 14:03:26.532292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.532315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 00:31:44.296 [2024-12-05 14:03:26.532649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.532674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 
00:31:44.296 [2024-12-05 14:03:26.532779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.532801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 00:31:44.296 [2024-12-05 14:03:26.532924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.532950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 00:31:44.296 [2024-12-05 14:03:26.533199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.533223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 00:31:44.296 [2024-12-05 14:03:26.533399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.533423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 00:31:44.296 [2024-12-05 14:03:26.533679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.533701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 
00:31:44.296 [2024-12-05 14:03:26.533823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.533845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 00:31:44.296 [2024-12-05 14:03:26.533967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.533989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 00:31:44.296 [2024-12-05 14:03:26.534251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.534274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 00:31:44.296 [2024-12-05 14:03:26.534552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.534578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 00:31:44.296 [2024-12-05 14:03:26.534675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.534698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 
00:31:44.296 [2024-12-05 14:03:26.534830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.534852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 00:31:44.296 [2024-12-05 14:03:26.535080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.535103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 00:31:44.296 [2024-12-05 14:03:26.535334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.535362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 00:31:44.296 [2024-12-05 14:03:26.535569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.535591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 00:31:44.296 [2024-12-05 14:03:26.535754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.535777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 
00:31:44.296 [2024-12-05 14:03:26.535879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.535900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 00:31:44.296 [2024-12-05 14:03:26.536012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.536034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 00:31:44.296 [2024-12-05 14:03:26.536260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.536282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 00:31:44.296 [2024-12-05 14:03:26.536520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.536545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 00:31:44.296 [2024-12-05 14:03:26.536737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.536769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 
00:31:44.296 [2024-12-05 14:03:26.536952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.536985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 00:31:44.296 [2024-12-05 14:03:26.537238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.296 [2024-12-05 14:03:26.537272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.296 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.537498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.537532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.537689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.537723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.537857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.537889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 
00:31:44.297 [2024-12-05 14:03:26.538187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.538210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.538497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.538522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.538705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.538729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.538931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.538954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.539137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.539160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 
00:31:44.297 [2024-12-05 14:03:26.539336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.539359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.539511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.539536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.539729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.539752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.540054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.540077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.540259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.540282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 
00:31:44.297 [2024-12-05 14:03:26.540550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.540574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.540744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.540766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.540950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.540973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.541181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.541205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.541329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.541356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 
00:31:44.297 [2024-12-05 14:03:26.541588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.541611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.541749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.541771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.541962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.541987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.542153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.542198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.542425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.542460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 
00:31:44.297 [2024-12-05 14:03:26.542704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.542739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.542872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.542905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.543163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.543195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.543396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.543419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.543609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.543633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 
00:31:44.297 [2024-12-05 14:03:26.543830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.543852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.544076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.544109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.544363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.544408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.544538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.544571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.544691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.544723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 
00:31:44.297 [2024-12-05 14:03:26.544875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.544910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.545131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.545163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.545380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.545415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.545616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.545650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 00:31:44.297 [2024-12-05 14:03:26.545925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.297 [2024-12-05 14:03:26.545958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.297 qpair failed and we were unable to recover it. 
00:31:44.297 [2024-12-05 14:03:26.546244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.546283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.546426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.546449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.546584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.546607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.546710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.546731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.546916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.546941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.547217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.547239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.547398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.547423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.547706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.547728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.547902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.547924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.548174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.548197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.548430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.548453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.548659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.548681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.548808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.548831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.549021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.549043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.549295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.549328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.549516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.549551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.549761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.549793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.549936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.549968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.550091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.550124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.550409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.550445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.550603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.550643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.550777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.550812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.551043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.551067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.551326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.551349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.551599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.551624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.551795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.551820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.551930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.551953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.552057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.552081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.552260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.552283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.552533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.552558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.552668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.552690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.552790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.552815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.552952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.552977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.553155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.553178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.553278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.553299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.553489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.553513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.553681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.553703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.553837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.553859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.298 qpair failed and we were unable to recover it.
00:31:44.298 [2024-12-05 14:03:26.553976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.298 [2024-12-05 14:03:26.554000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.554205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.554229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.554361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.554396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.554520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.554543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.554791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.554813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.554990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.555013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.555203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.555227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.555407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.555430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.555550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.555572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.555803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.555831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.556002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.556024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.556206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.556251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.556536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.556570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.556723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.556758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.556887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.556922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.557068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.557103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.557297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.557329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.557470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.557504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.557638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.557660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.557870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.557894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.558000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.558022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.558125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.558149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.558315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.558337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.558536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.558562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.558687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.558711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.558874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.558898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.559001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.559023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.559202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.559281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.559456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.559496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.559644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.559677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.559828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.559862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.560067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.560100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.560219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.560251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.560488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.560523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.560720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.560755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.560868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.560900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.561036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.561070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.561272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.299 [2024-12-05 14:03:26.561306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:44.299 qpair failed and we were unable to recover it.
00:31:44.299 [2024-12-05 14:03:26.561451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.300 [2024-12-05 14:03:26.561477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.300 qpair failed and we were unable to recover it.
00:31:44.300 [2024-12-05 14:03:26.561590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.300 [2024-12-05 14:03:26.561612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.300 qpair failed and we were unable to recover it.
00:31:44.300 [2024-12-05 14:03:26.561717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.300 [2024-12-05 14:03:26.561740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.300 qpair failed and we were unable to recover it.
00:31:44.300 [2024-12-05 14:03:26.561914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.300 [2024-12-05 14:03:26.561938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.300 qpair failed and we were unable to recover it.
00:31:44.300 [2024-12-05 14:03:26.562064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.300 [2024-12-05 14:03:26.562088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.300 qpair failed and we were unable to recover it.
00:31:44.300 [2024-12-05 14:03:26.562275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.300 [2024-12-05 14:03:26.562297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.300 qpair failed and we were unable to recover it.
00:31:44.300 [2024-12-05 14:03:26.562414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.300 [2024-12-05 14:03:26.562437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.300 qpair failed and we were unable to recover it.
00:31:44.300 [2024-12-05 14:03:26.562535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.300 [2024-12-05 14:03:26.562560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.300 qpair failed and we were unable to recover it.
00:31:44.300 [2024-12-05 14:03:26.562743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.300 [2024-12-05 14:03:26.562768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.300 qpair failed and we were unable to recover it.
00:31:44.300 [2024-12-05 14:03:26.562950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.300 [2024-12-05 14:03:26.562972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.300 qpair failed and we were unable to recover it.
00:31:44.300 [2024-12-05 14:03:26.563151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.300 [2024-12-05 14:03:26.563173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.300 qpair failed and we were unable to recover it.
00:31:44.300 [2024-12-05 14:03:26.563349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.300 [2024-12-05 14:03:26.563377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.300 qpair failed and we were unable to recover it.
00:31:44.300 [2024-12-05 14:03:26.563504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.300 [2024-12-05 14:03:26.563543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:44.300 qpair failed and we were unable to recover it.
00:31:44.300 [2024-12-05 14:03:26.563687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.300 [2024-12-05 14:03:26.563721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:44.300 qpair failed and we were unable to recover it.
00:31:44.300 [2024-12-05 14:03:26.563836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.300 [2024-12-05 14:03:26.563871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:44.300 qpair failed and we were unable to recover it.
00:31:44.300 [2024-12-05 14:03:26.564020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.300 [2024-12-05 14:03:26.564053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:44.300 qpair failed and we were unable to recover it.
00:31:44.300 [2024-12-05 14:03:26.564238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.300 [2024-12-05 14:03:26.564270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:44.300 qpair failed and we were unable to recover it.
00:31:44.300 [2024-12-05 14:03:26.564414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.300 [2024-12-05 14:03:26.564450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:44.300 qpair failed and we were unable to recover it.
00:31:44.300 [2024-12-05 14:03:26.564597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.300 [2024-12-05 14:03:26.564630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:44.300 qpair failed and we were unable to recover it.
00:31:44.300 [2024-12-05 14:03:26.564737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.300 [2024-12-05 14:03:26.564770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:44.300 qpair failed and we were unable to recover it. 00:31:44.300 [2024-12-05 14:03:26.564907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.300 [2024-12-05 14:03:26.564940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:44.300 qpair failed and we were unable to recover it. 00:31:44.300 [2024-12-05 14:03:26.565052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.300 [2024-12-05 14:03:26.565087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:44.300 qpair failed and we were unable to recover it. 00:31:44.300 [2024-12-05 14:03:26.565209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.300 [2024-12-05 14:03:26.565241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:44.300 qpair failed and we were unable to recover it. 00:31:44.300 [2024-12-05 14:03:26.565424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.300 [2024-12-05 14:03:26.565459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:44.300 qpair failed and we were unable to recover it. 
00:31:44.300 [2024-12-05 14:03:26.565593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.300 [2024-12-05 14:03:26.565626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:44.300 qpair failed and we were unable to recover it. 00:31:44.300 [2024-12-05 14:03:26.565740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.300 [2024-12-05 14:03:26.565778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:44.300 qpair failed and we were unable to recover it. 00:31:44.300 [2024-12-05 14:03:26.565899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.300 [2024-12-05 14:03:26.565933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:44.300 qpair failed and we were unable to recover it. 00:31:44.300 [2024-12-05 14:03:26.566054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.300 [2024-12-05 14:03:26.566088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:44.300 qpair failed and we were unable to recover it. 00:31:44.300 [2024-12-05 14:03:26.566342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.300 [2024-12-05 14:03:26.566384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:44.300 qpair failed and we were unable to recover it. 
00:31:44.300 [2024-12-05 14:03:26.566521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.300 [2024-12-05 14:03:26.566553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:44.300 qpair failed and we were unable to recover it. 00:31:44.300 [2024-12-05 14:03:26.566698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.300 [2024-12-05 14:03:26.566731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:44.300 qpair failed and we were unable to recover it. 00:31:44.300 [2024-12-05 14:03:26.566918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.300 [2024-12-05 14:03:26.566950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:44.300 qpair failed and we were unable to recover it. 00:31:44.300 [2024-12-05 14:03:26.567066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.300 [2024-12-05 14:03:26.567094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.300 qpair failed and we were unable to recover it. 00:31:44.300 [2024-12-05 14:03:26.567189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.567212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 
00:31:44.301 [2024-12-05 14:03:26.567316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.567339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 00:31:44.301 [2024-12-05 14:03:26.567466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.567494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 00:31:44.301 [2024-12-05 14:03:26.567587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.567610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 00:31:44.301 [2024-12-05 14:03:26.567706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.567729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 00:31:44.301 [2024-12-05 14:03:26.567833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.567855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 
00:31:44.301 [2024-12-05 14:03:26.567954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.567976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 00:31:44.301 [2024-12-05 14:03:26.568088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.568113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 00:31:44.301 [2024-12-05 14:03:26.568227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.568253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 00:31:44.301 [2024-12-05 14:03:26.568349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.568381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 00:31:44.301 [2024-12-05 14:03:26.568559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.568582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 
00:31:44.301 [2024-12-05 14:03:26.568705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.568728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 00:31:44.301 [2024-12-05 14:03:26.568903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.568928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 00:31:44.301 [2024-12-05 14:03:26.569040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.569063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 00:31:44.301 [2024-12-05 14:03:26.569292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.569314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 00:31:44.301 [2024-12-05 14:03:26.569404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.569428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 
00:31:44.301 [2024-12-05 14:03:26.569536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.569559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 00:31:44.301 [2024-12-05 14:03:26.569746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.569779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 00:31:44.301 [2024-12-05 14:03:26.569896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.569928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 00:31:44.301 [2024-12-05 14:03:26.570036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.570070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 00:31:44.301 [2024-12-05 14:03:26.570200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.570233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 
00:31:44.301 [2024-12-05 14:03:26.570509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.570533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 00:31:44.301 [2024-12-05 14:03:26.570644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.570666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 00:31:44.301 [2024-12-05 14:03:26.570846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.570869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 00:31:44.301 [2024-12-05 14:03:26.570960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.570984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 00:31:44.301 [2024-12-05 14:03:26.571141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.571164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 
00:31:44.301 [2024-12-05 14:03:26.571344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.571374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 00:31:44.301 [2024-12-05 14:03:26.571488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.571510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 00:31:44.301 [2024-12-05 14:03:26.571620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.571645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 00:31:44.301 [2024-12-05 14:03:26.571748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.571771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 00:31:44.301 [2024-12-05 14:03:26.571949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.571972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 
00:31:44.301 [2024-12-05 14:03:26.572059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.572081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.301 qpair failed and we were unable to recover it. 00:31:44.301 [2024-12-05 14:03:26.572186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.301 [2024-12-05 14:03:26.572208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.572303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.572325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.572438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.572461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.572553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.572575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 
00:31:44.302 [2024-12-05 14:03:26.572664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.572686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.572779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.572803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.572975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.572997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.573196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.573221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.573317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.573339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 
00:31:44.302 [2024-12-05 14:03:26.573439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.573464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.573627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.573649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.573809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.573836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.573941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.573962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.574052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.574075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 
00:31:44.302 [2024-12-05 14:03:26.574268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.574292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.574460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.574486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.574592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.574614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.574725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.574745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.574856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.574878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 
00:31:44.302 [2024-12-05 14:03:26.574973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.574995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.575093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.575123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.575286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.575308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.575429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.575453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.575559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.575581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 
00:31:44.302 [2024-12-05 14:03:26.575685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.575709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.575806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.575828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.576084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.576106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.576267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.576289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.576399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.576429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 
00:31:44.302 [2024-12-05 14:03:26.576659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.576682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.576777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.576801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.576964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.576987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.577084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.577107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.577196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.577218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 
00:31:44.302 [2024-12-05 14:03:26.577395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.577419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.577512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.577534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.577626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.577648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.577751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.577772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 00:31:44.302 [2024-12-05 14:03:26.577861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.302 [2024-12-05 14:03:26.577885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.302 qpair failed and we were unable to recover it. 
00:31:44.302 [2024-12-05 14:03:26.578068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.578090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.578222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.578244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.578337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.578360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.578490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.578515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.578697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.578722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 
00:31:44.303 [2024-12-05 14:03:26.578887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.578911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.579077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.579101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.579206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.579228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.579326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.579349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.579462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.579487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 
00:31:44.303 [2024-12-05 14:03:26.579748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.579770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.579879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.579901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.579997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.580020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.580120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.580143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.580321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.580343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 
00:31:44.303 [2024-12-05 14:03:26.580580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.580604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.580693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.580716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.580823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.580845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.580949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.580972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.581082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.581104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 
00:31:44.303 [2024-12-05 14:03:26.581205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.581230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.581342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.581377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.581559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.581583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.581675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.581698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.581797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.581819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 
00:31:44.303 [2024-12-05 14:03:26.581924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.581947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.582125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.582148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.582306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.582329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.582432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.582455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.582635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.582658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 
00:31:44.303 [2024-12-05 14:03:26.582818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.582847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.582983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.583005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.583185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.583208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.583314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.583336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.583442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.583466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 
00:31:44.303 [2024-12-05 14:03:26.583563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.583585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.583675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.583699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.583794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.583816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.303 qpair failed and we were unable to recover it. 00:31:44.303 [2024-12-05 14:03:26.583910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.303 [2024-12-05 14:03:26.583936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.584049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.584073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 
00:31:44.304 [2024-12-05 14:03:26.584168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.584191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.584278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.584301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.584551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.584576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.584675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.584697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.584795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.584819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 
00:31:44.304 [2024-12-05 14:03:26.584979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.585003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.585090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.585112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.585203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.585225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.585340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.585363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.585555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.585579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 
00:31:44.304 [2024-12-05 14:03:26.585673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.585696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.585815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.585837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.585941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.585963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.586070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.586092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.586200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.586223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 
00:31:44.304 [2024-12-05 14:03:26.586398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.586422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.586521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.586543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.586729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.586755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.586844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.586866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.586960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.586982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 
00:31:44.304 [2024-12-05 14:03:26.587242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.587264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.587426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.587450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.587632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.587655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.587955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.587989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.588261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.588295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 
00:31:44.304 [2024-12-05 14:03:26.588514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.588548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.588702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.588736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.588879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.588922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.589174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.589197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.589435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.589458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 
00:31:44.304 [2024-12-05 14:03:26.589629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.589651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.589760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.589782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.589946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.589969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.590249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.590274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.590557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.590581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 
00:31:44.304 [2024-12-05 14:03:26.590757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.590780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.590911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.304 [2024-12-05 14:03:26.590934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.304 qpair failed and we were unable to recover it. 00:31:44.304 [2024-12-05 14:03:26.591190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.305 [2024-12-05 14:03:26.591232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.305 qpair failed and we were unable to recover it. 00:31:44.305 [2024-12-05 14:03:26.591385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.305 [2024-12-05 14:03:26.591422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.305 qpair failed and we were unable to recover it. 00:31:44.305 [2024-12-05 14:03:26.591704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.305 [2024-12-05 14:03:26.591738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.305 qpair failed and we were unable to recover it. 
00:31:44.305 [2024-12-05 14:03:26.591935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.305 [2024-12-05 14:03:26.591969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.305 qpair failed and we were unable to recover it. 00:31:44.305 [2024-12-05 14:03:26.592207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.305 [2024-12-05 14:03:26.592240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.305 qpair failed and we were unable to recover it. 00:31:44.305 [2024-12-05 14:03:26.592425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.305 [2024-12-05 14:03:26.592448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.305 qpair failed and we were unable to recover it. 00:31:44.305 [2024-12-05 14:03:26.592625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.305 [2024-12-05 14:03:26.592666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.305 qpair failed and we were unable to recover it. 00:31:44.305 [2024-12-05 14:03:26.592914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.305 [2024-12-05 14:03:26.592947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.305 qpair failed and we were unable to recover it. 
00:31:44.305 [2024-12-05 14:03:26.593245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.305 [2024-12-05 14:03:26.593280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.305 qpair failed and we were unable to recover it. 00:31:44.305 [2024-12-05 14:03:26.593476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.305 [2024-12-05 14:03:26.593513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.305 qpair failed and we were unable to recover it. 00:31:44.305 [2024-12-05 14:03:26.593653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.305 [2024-12-05 14:03:26.593686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.305 qpair failed and we were unable to recover it. 00:31:44.305 [2024-12-05 14:03:26.593801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.305 [2024-12-05 14:03:26.593833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.305 qpair failed and we were unable to recover it. 00:31:44.305 [2024-12-05 14:03:26.594037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.305 [2024-12-05 14:03:26.594072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.305 qpair failed and we were unable to recover it. 
00:31:44.305 [2024-12-05 14:03:26.594290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.305 [2024-12-05 14:03:26.594312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.305 qpair failed and we were unable to recover it. 00:31:44.305 [2024-12-05 14:03:26.594484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.305 [2024-12-05 14:03:26.594508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.305 qpair failed and we were unable to recover it. 00:31:44.305 [2024-12-05 14:03:26.594613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.305 [2024-12-05 14:03:26.594636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.305 qpair failed and we were unable to recover it. 00:31:44.305 [2024-12-05 14:03:26.594762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.305 [2024-12-05 14:03:26.594784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.305 qpair failed and we were unable to recover it. 00:31:44.305 [2024-12-05 14:03:26.594941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.305 [2024-12-05 14:03:26.594965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.305 qpair failed and we were unable to recover it. 
00:31:44.305 [2024-12-05 14:03:26.595169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.305 [2024-12-05 14:03:26.595192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.305 qpair failed and we were unable to recover it. 00:31:44.305 [2024-12-05 14:03:26.595375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.305 [2024-12-05 14:03:26.595400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.305 qpair failed and we were unable to recover it. 00:31:44.305 [2024-12-05 14:03:26.595604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.305 [2024-12-05 14:03:26.595626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.305 qpair failed and we were unable to recover it. 00:31:44.305 [2024-12-05 14:03:26.595791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.305 [2024-12-05 14:03:26.595820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.305 qpair failed and we were unable to recover it. 00:31:44.305 [2024-12-05 14:03:26.595922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.305 [2024-12-05 14:03:26.595944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.305 qpair failed and we were unable to recover it. 
00:31:44.305 [2024-12-05 14:03:26.596137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.305 [2024-12-05 14:03:26.596159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.305 qpair failed and we were unable to recover it. 00:31:44.305 [2024-12-05 14:03:26.596356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.305 [2024-12-05 14:03:26.596399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.305 qpair failed and we were unable to recover it. 00:31:44.305 [2024-12-05 14:03:26.596517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.305 [2024-12-05 14:03:26.596550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.305 qpair failed and we were unable to recover it. 00:31:44.305 [2024-12-05 14:03:26.596773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.305 [2024-12-05 14:03:26.596813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.305 qpair failed and we were unable to recover it. 00:31:44.305 [2024-12-05 14:03:26.597026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.305 [2024-12-05 14:03:26.597061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.305 qpair failed and we were unable to recover it. 
00:31:44.305 [2024-12-05 14:03:26.597330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.305 [2024-12-05 14:03:26.597354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.305 qpair failed and we were unable to recover it.
00:31:44.305 [2024-12-05 14:03:26.597524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.305 [2024-12-05 14:03:26.597547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.305 qpair failed and we were unable to recover it.
00:31:44.305 [2024-12-05 14:03:26.597661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.305 [2024-12-05 14:03:26.597684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.305 qpair failed and we were unable to recover it.
00:31:44.305 [2024-12-05 14:03:26.597885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.305 [2024-12-05 14:03:26.597908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.305 qpair failed and we were unable to recover it.
00:31:44.305 [2024-12-05 14:03:26.598098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.305 [2024-12-05 14:03:26.598120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.305 qpair failed and we were unable to recover it.
00:31:44.305 [2024-12-05 14:03:26.598297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.305 [2024-12-05 14:03:26.598319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.305 qpair failed and we were unable to recover it.
00:31:44.305 [2024-12-05 14:03:26.598523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.305 [2024-12-05 14:03:26.598559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.598834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.598868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.599093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.599118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.599305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.599338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.599477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.599512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.599655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.599687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.599887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.599920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.600162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.600196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.600511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.600545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.600730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.600764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.601032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.601065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.601314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.601339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.601451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.601472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.601604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.601628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.601804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.601832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.601949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.601973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.602219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.602243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.602436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.602460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.602632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.602677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.602879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.602914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.603176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.603219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.603483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.603507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.603686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.603709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.603945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.603969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.604206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.604229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.604461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.604487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.604715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.604739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.604861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.604884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.605016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.605039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.605213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.605235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.605396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.605420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.605541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.605564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.605686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.605709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.605955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.605977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.606187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.606209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.606481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.606504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.606614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.606636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.606897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.606919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.306 [2024-12-05 14:03:26.607009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.306 [2024-12-05 14:03:26.607029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.306 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.607162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.607184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.607357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.607410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.607614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.607646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.607769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.607802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.608017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.608050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.608303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.608335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.608557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.608591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.608840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.608872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.609014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.609035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.609267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.609289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.609401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.609425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.609597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.609619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.609816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.609839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.610020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.610043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.610236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.610268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.610485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.610519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.610630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.610668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.610808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.610841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.610965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.610997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.611192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.611224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.611378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.611412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.611538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.611570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.611711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.611744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.611859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.611892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.612018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.612050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.612252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.612284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.612486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.612520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.612644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.612676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.612819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.612851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.613062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.613095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.613304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.613326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.613513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.613536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.613699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.613722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.613845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.613868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.613972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.613994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.614197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.614221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.614457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.614480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.614594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.307 [2024-12-05 14:03:26.614617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.307 qpair failed and we were unable to recover it.
00:31:44.307 [2024-12-05 14:03:26.614805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.308 [2024-12-05 14:03:26.614827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.308 qpair failed and we were unable to recover it.
00:31:44.308 [2024-12-05 14:03:26.615011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.308 [2024-12-05 14:03:26.615033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.308 qpair failed and we were unable to recover it.
00:31:44.308 [2024-12-05 14:03:26.615223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.308 [2024-12-05 14:03:26.615245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.308 qpair failed and we were unable to recover it.
00:31:44.308 [2024-12-05 14:03:26.615504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.308 [2024-12-05 14:03:26.615538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.308 qpair failed and we were unable to recover it.
00:31:44.308 [2024-12-05 14:03:26.615680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.308 [2024-12-05 14:03:26.615713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.308 qpair failed and we were unable to recover it.
00:31:44.308 [2024-12-05 14:03:26.615953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.308 [2024-12-05 14:03:26.615991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.308 qpair failed and we were unable to recover it.
00:31:44.308 [2024-12-05 14:03:26.616193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.308 [2024-12-05 14:03:26.616226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.308 qpair failed and we were unable to recover it.
00:31:44.308 [2024-12-05 14:03:26.616442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.308 [2024-12-05 14:03:26.616465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.308 qpair failed and we were unable to recover it.
00:31:44.308 [2024-12-05 14:03:26.616698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.308 [2024-12-05 14:03:26.616731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.308 qpair failed and we were unable to recover it.
00:31:44.308 [2024-12-05 14:03:26.616874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.308 [2024-12-05 14:03:26.616907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.308 qpair failed and we were unable to recover it.
00:31:44.308 [2024-12-05 14:03:26.617163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.308 [2024-12-05 14:03:26.617205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.308 qpair failed and we were unable to recover it.
00:31:44.308 [2024-12-05 14:03:26.617461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.308 [2024-12-05 14:03:26.617484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.308 qpair failed and we were unable to recover it.
00:31:44.308 [2024-12-05 14:03:26.617661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.308 [2024-12-05 14:03:26.617684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.308 qpair failed and we were unable to recover it. 00:31:44.308 [2024-12-05 14:03:26.617914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.308 [2024-12-05 14:03:26.617946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.308 qpair failed and we were unable to recover it. 00:31:44.308 [2024-12-05 14:03:26.618204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.308 [2024-12-05 14:03:26.618236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.308 qpair failed and we were unable to recover it. 00:31:44.308 [2024-12-05 14:03:26.618488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.308 [2024-12-05 14:03:26.618512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.308 qpair failed and we were unable to recover it. 00:31:44.308 [2024-12-05 14:03:26.618677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.308 [2024-12-05 14:03:26.618699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.308 qpair failed and we were unable to recover it. 
00:31:44.308 [2024-12-05 14:03:26.618888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.308 [2024-12-05 14:03:26.618910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.308 qpair failed and we were unable to recover it. 00:31:44.308 [2024-12-05 14:03:26.619097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.308 [2024-12-05 14:03:26.619142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.308 qpair failed and we were unable to recover it. 00:31:44.308 [2024-12-05 14:03:26.619434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.308 [2024-12-05 14:03:26.619468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.308 qpair failed and we were unable to recover it. 00:31:44.308 [2024-12-05 14:03:26.619604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.308 [2024-12-05 14:03:26.619637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.308 qpair failed and we were unable to recover it. 00:31:44.308 [2024-12-05 14:03:26.619823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.308 [2024-12-05 14:03:26.619855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.308 qpair failed and we were unable to recover it. 
00:31:44.308 [2024-12-05 14:03:26.620067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.308 [2024-12-05 14:03:26.620100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.308 qpair failed and we were unable to recover it. 00:31:44.308 [2024-12-05 14:03:26.620397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.308 [2024-12-05 14:03:26.620431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.308 qpair failed and we were unable to recover it. 00:31:44.308 [2024-12-05 14:03:26.620705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.308 [2024-12-05 14:03:26.620727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.308 qpair failed and we were unable to recover it. 00:31:44.308 [2024-12-05 14:03:26.620858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.308 [2024-12-05 14:03:26.620880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.308 qpair failed and we were unable to recover it. 00:31:44.308 [2024-12-05 14:03:26.621141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.308 [2024-12-05 14:03:26.621163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.308 qpair failed and we were unable to recover it. 
00:31:44.308 [2024-12-05 14:03:26.621392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.308 [2024-12-05 14:03:26.621426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.308 qpair failed and we were unable to recover it. 00:31:44.308 [2024-12-05 14:03:26.621593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.308 [2024-12-05 14:03:26.621624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.308 qpair failed and we were unable to recover it. 00:31:44.308 [2024-12-05 14:03:26.621839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.308 [2024-12-05 14:03:26.621872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.308 qpair failed and we were unable to recover it. 00:31:44.308 [2024-12-05 14:03:26.622187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.308 [2024-12-05 14:03:26.622220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.308 qpair failed and we were unable to recover it. 00:31:44.308 [2024-12-05 14:03:26.622537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.308 [2024-12-05 14:03:26.622561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.308 qpair failed and we were unable to recover it. 
00:31:44.308 [2024-12-05 14:03:26.622690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.308 [2024-12-05 14:03:26.622712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.308 qpair failed and we were unable to recover it. 00:31:44.308 [2024-12-05 14:03:26.622850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.308 [2024-12-05 14:03:26.622872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.308 qpair failed and we were unable to recover it. 00:31:44.308 [2024-12-05 14:03:26.623039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.308 [2024-12-05 14:03:26.623061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.308 qpair failed and we were unable to recover it. 00:31:44.308 [2024-12-05 14:03:26.623177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.308 [2024-12-05 14:03:26.623199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.308 qpair failed and we were unable to recover it. 00:31:44.308 [2024-12-05 14:03:26.623437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.308 [2024-12-05 14:03:26.623461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.308 qpair failed and we were unable to recover it. 
00:31:44.308 [2024-12-05 14:03:26.623690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.308 [2024-12-05 14:03:26.623712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.308 qpair failed and we were unable to recover it. 00:31:44.308 [2024-12-05 14:03:26.623833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.308 [2024-12-05 14:03:26.623855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.308 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.624131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.624154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.624357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.624387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.624629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.624651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 
00:31:44.309 [2024-12-05 14:03:26.624769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.624791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.624988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.625010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.625210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.625242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.625456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.625490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.625743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.625781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 
00:31:44.309 [2024-12-05 14:03:26.626032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.626064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.626346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.626384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.626568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.626590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.626860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.626882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.626998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.627020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 
00:31:44.309 [2024-12-05 14:03:26.627336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.627358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.627529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.627551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.627783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.627816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.628042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.628075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.628258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.628291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 
00:31:44.309 [2024-12-05 14:03:26.628477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.628510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.628712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.628743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.628947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.628980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.629276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.629298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.629562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.629585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 
00:31:44.309 [2024-12-05 14:03:26.629776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.629798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.630043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.630066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.630224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.630247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.630458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.630492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.630706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.630738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 
00:31:44.309 [2024-12-05 14:03:26.630876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.630907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.631145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.631177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.631451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.631474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.631704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.631727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.631897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.631919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 
00:31:44.309 [2024-12-05 14:03:26.632099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.632121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.632241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.632263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.632442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.632466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.632677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.632699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 00:31:44.309 [2024-12-05 14:03:26.632812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.309 [2024-12-05 14:03:26.632834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.309 qpair failed and we were unable to recover it. 
00:31:44.309 [2024-12-05 14:03:26.633003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.310 [2024-12-05 14:03:26.633045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.310 qpair failed and we were unable to recover it. 00:31:44.310 [2024-12-05 14:03:26.633268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.310 [2024-12-05 14:03:26.633300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.310 qpair failed and we were unable to recover it. 00:31:44.310 [2024-12-05 14:03:26.633605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.310 [2024-12-05 14:03:26.633648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.310 qpair failed and we were unable to recover it. 00:31:44.310 [2024-12-05 14:03:26.633762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.310 [2024-12-05 14:03:26.633784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.310 qpair failed and we were unable to recover it. 00:31:44.310 [2024-12-05 14:03:26.633876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.310 [2024-12-05 14:03:26.633896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.310 qpair failed and we were unable to recover it. 
00:31:44.310 [2024-12-05 14:03:26.634185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.310 [2024-12-05 14:03:26.634208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.310 qpair failed and we were unable to recover it. 00:31:44.310 [2024-12-05 14:03:26.634466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.310 [2024-12-05 14:03:26.634489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.310 qpair failed and we were unable to recover it. 00:31:44.310 [2024-12-05 14:03:26.634662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.310 [2024-12-05 14:03:26.634685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.310 qpair failed and we were unable to recover it. 00:31:44.310 [2024-12-05 14:03:26.634846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.310 [2024-12-05 14:03:26.634867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.310 qpair failed and we were unable to recover it. 00:31:44.310 [2024-12-05 14:03:26.635085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.310 [2024-12-05 14:03:26.635118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.310 qpair failed and we were unable to recover it. 
00:31:44.310 [2024-12-05 14:03:26.635328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.310 [2024-12-05 14:03:26.635361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.310 qpair failed and we were unable to recover it. 00:31:44.310 [2024-12-05 14:03:26.635583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.310 [2024-12-05 14:03:26.635606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.310 qpair failed and we were unable to recover it. 00:31:44.310 [2024-12-05 14:03:26.635788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.310 [2024-12-05 14:03:26.635810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.310 qpair failed and we were unable to recover it. 00:31:44.310 [2024-12-05 14:03:26.635970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.310 [2024-12-05 14:03:26.635992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.310 qpair failed and we were unable to recover it. 00:31:44.310 [2024-12-05 14:03:26.636092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.310 [2024-12-05 14:03:26.636114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.310 qpair failed and we were unable to recover it. 
00:31:44.310 [2024-12-05 14:03:26.636458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.310 [2024-12-05 14:03:26.636537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.310 qpair failed and we were unable to recover it. 00:31:44.310 [2024-12-05 14:03:26.636783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.310 [2024-12-05 14:03:26.636819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.310 qpair failed and we were unable to recover it. 00:31:44.310 [2024-12-05 14:03:26.636973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.310 [2024-12-05 14:03:26.637006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.310 qpair failed and we were unable to recover it. 00:31:44.310 [2024-12-05 14:03:26.637212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.310 [2024-12-05 14:03:26.637245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.310 qpair failed and we were unable to recover it. 00:31:44.310 [2024-12-05 14:03:26.637498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.310 [2024-12-05 14:03:26.637534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.310 qpair failed and we were unable to recover it. 
00:31:44.310 [2024-12-05 14:03:26.637831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.310 [2024-12-05 14:03:26.637863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.310 qpair failed and we were unable to recover it. 00:31:44.310 [2024-12-05 14:03:26.638071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.310 [2024-12-05 14:03:26.638104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.310 qpair failed and we were unable to recover it. 00:31:44.310 [2024-12-05 14:03:26.638296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.310 [2024-12-05 14:03:26.638328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.310 qpair failed and we were unable to recover it. 00:31:44.310 [2024-12-05 14:03:26.638617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.310 [2024-12-05 14:03:26.638652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.310 qpair failed and we were unable to recover it. 00:31:44.310 [2024-12-05 14:03:26.638883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.310 [2024-12-05 14:03:26.638916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.310 qpair failed and we were unable to recover it. 
00:31:44.313 [2024-12-05 14:03:26.639195 - 14:03:26.667228] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: connect() failed, errno = 111; sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 -- message repeated, qpair failed and we were unable to recover it.
00:31:44.313 [2024-12-05 14:03:26.667424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.313 [2024-12-05 14:03:26.667458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.313 qpair failed and we were unable to recover it. 00:31:44.313 [2024-12-05 14:03:26.667664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.313 [2024-12-05 14:03:26.667697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.313 qpair failed and we were unable to recover it. 00:31:44.313 [2024-12-05 14:03:26.667908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.313 [2024-12-05 14:03:26.667941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.313 qpair failed and we were unable to recover it. 00:31:44.313 [2024-12-05 14:03:26.668240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.313 [2024-12-05 14:03:26.668272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.313 qpair failed and we were unable to recover it. 00:31:44.313 [2024-12-05 14:03:26.668542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.313 [2024-12-05 14:03:26.668575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.313 qpair failed and we were unable to recover it. 
00:31:44.313 [2024-12-05 14:03:26.668776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.313 [2024-12-05 14:03:26.668808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.313 qpair failed and we were unable to recover it. 00:31:44.313 [2024-12-05 14:03:26.669032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.313 [2024-12-05 14:03:26.669064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.313 qpair failed and we were unable to recover it. 00:31:44.313 [2024-12-05 14:03:26.669268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.313 [2024-12-05 14:03:26.669300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.313 qpair failed and we were unable to recover it. 00:31:44.313 [2024-12-05 14:03:26.669553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.313 [2024-12-05 14:03:26.669586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.313 qpair failed and we were unable to recover it. 00:31:44.313 [2024-12-05 14:03:26.669895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.313 [2024-12-05 14:03:26.669927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.313 qpair failed and we were unable to recover it. 
00:31:44.313 [2024-12-05 14:03:26.670124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.313 [2024-12-05 14:03:26.670156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.313 qpair failed and we were unable to recover it. 00:31:44.313 [2024-12-05 14:03:26.670350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.313 [2024-12-05 14:03:26.670400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.313 qpair failed and we were unable to recover it. 00:31:44.313 [2024-12-05 14:03:26.670606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.313 [2024-12-05 14:03:26.670638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.313 qpair failed and we were unable to recover it. 00:31:44.313 [2024-12-05 14:03:26.670839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.670871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 00:31:44.314 [2024-12-05 14:03:26.671195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.671227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 
00:31:44.314 [2024-12-05 14:03:26.671366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.671407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 00:31:44.314 [2024-12-05 14:03:26.671660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.671692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 00:31:44.314 [2024-12-05 14:03:26.671988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.672020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 00:31:44.314 [2024-12-05 14:03:26.672222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.672254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 00:31:44.314 [2024-12-05 14:03:26.672491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.672525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 
00:31:44.314 [2024-12-05 14:03:26.672671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.672704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 00:31:44.314 [2024-12-05 14:03:26.672970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.673002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 00:31:44.314 [2024-12-05 14:03:26.673200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.673232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 00:31:44.314 [2024-12-05 14:03:26.673499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.673532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 00:31:44.314 [2024-12-05 14:03:26.673678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.673711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 
00:31:44.314 [2024-12-05 14:03:26.673962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.673994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 00:31:44.314 [2024-12-05 14:03:26.674278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.674310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 00:31:44.314 [2024-12-05 14:03:26.674595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.674628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 00:31:44.314 [2024-12-05 14:03:26.674885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.674917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 00:31:44.314 [2024-12-05 14:03:26.675229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.675261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 
00:31:44.314 [2024-12-05 14:03:26.675504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.675537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 00:31:44.314 [2024-12-05 14:03:26.675685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.675717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 00:31:44.314 [2024-12-05 14:03:26.675909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.675942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 00:31:44.314 [2024-12-05 14:03:26.676125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.676157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 00:31:44.314 [2024-12-05 14:03:26.676387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.676421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 
00:31:44.314 [2024-12-05 14:03:26.676678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.676716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 00:31:44.314 [2024-12-05 14:03:26.676936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.676968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 00:31:44.314 [2024-12-05 14:03:26.677240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.677273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 00:31:44.314 [2024-12-05 14:03:26.677482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.677515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 00:31:44.314 [2024-12-05 14:03:26.677766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.677797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 
00:31:44.314 [2024-12-05 14:03:26.677942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.677974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 00:31:44.314 [2024-12-05 14:03:26.678251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.678283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 00:31:44.314 [2024-12-05 14:03:26.678491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.678524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 00:31:44.314 [2024-12-05 14:03:26.678722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.678755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 00:31:44.314 [2024-12-05 14:03:26.678952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.678984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 
00:31:44.314 [2024-12-05 14:03:26.679259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.679291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 00:31:44.314 [2024-12-05 14:03:26.679434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.679467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 00:31:44.314 [2024-12-05 14:03:26.679625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.679657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 00:31:44.314 [2024-12-05 14:03:26.679945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.314 [2024-12-05 14:03:26.679976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.314 qpair failed and we were unable to recover it. 00:31:44.314 [2024-12-05 14:03:26.680177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.315 [2024-12-05 14:03:26.680209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.315 qpair failed and we were unable to recover it. 
00:31:44.315 [2024-12-05 14:03:26.680470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.315 [2024-12-05 14:03:26.680503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.315 qpair failed and we were unable to recover it. 00:31:44.315 [2024-12-05 14:03:26.680655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.315 [2024-12-05 14:03:26.680688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.315 qpair failed and we were unable to recover it. 00:31:44.315 [2024-12-05 14:03:26.680888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.315 [2024-12-05 14:03:26.680921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.315 qpair failed and we were unable to recover it. 00:31:44.315 [2024-12-05 14:03:26.681213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.315 [2024-12-05 14:03:26.681245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.315 qpair failed and we were unable to recover it. 00:31:44.315 [2024-12-05 14:03:26.681465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.315 [2024-12-05 14:03:26.681498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.315 qpair failed and we were unable to recover it. 
00:31:44.315 [2024-12-05 14:03:26.681696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.315 [2024-12-05 14:03:26.681729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.315 qpair failed and we were unable to recover it. 00:31:44.315 [2024-12-05 14:03:26.681929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.315 [2024-12-05 14:03:26.681960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.315 qpair failed and we were unable to recover it. 00:31:44.315 [2024-12-05 14:03:26.682153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.315 [2024-12-05 14:03:26.682185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.315 qpair failed and we were unable to recover it. 00:31:44.315 [2024-12-05 14:03:26.682409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.315 [2024-12-05 14:03:26.682444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.315 qpair failed and we were unable to recover it. 00:31:44.315 [2024-12-05 14:03:26.682664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.315 [2024-12-05 14:03:26.682697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.315 qpair failed and we were unable to recover it. 
00:31:44.315 [2024-12-05 14:03:26.682895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.315 [2024-12-05 14:03:26.682928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.315 qpair failed and we were unable to recover it. 00:31:44.315 [2024-12-05 14:03:26.683179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.315 [2024-12-05 14:03:26.683210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.315 qpair failed and we were unable to recover it. 00:31:44.315 [2024-12-05 14:03:26.683436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.315 [2024-12-05 14:03:26.683471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.315 qpair failed and we were unable to recover it. 00:31:44.315 [2024-12-05 14:03:26.683774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.315 [2024-12-05 14:03:26.683807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.315 qpair failed and we were unable to recover it. 00:31:44.315 [2024-12-05 14:03:26.684020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.315 [2024-12-05 14:03:26.684052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.315 qpair failed and we were unable to recover it. 
00:31:44.315 [2024-12-05 14:03:26.684327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.315 [2024-12-05 14:03:26.684359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.315 qpair failed and we were unable to recover it. 00:31:44.315 [2024-12-05 14:03:26.684552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.315 [2024-12-05 14:03:26.684584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.315 qpair failed and we were unable to recover it. 00:31:44.315 [2024-12-05 14:03:26.684861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.315 [2024-12-05 14:03:26.684894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.315 qpair failed and we were unable to recover it. 00:31:44.315 [2024-12-05 14:03:26.685198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.315 [2024-12-05 14:03:26.685231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.315 qpair failed and we were unable to recover it. 00:31:44.315 [2024-12-05 14:03:26.685479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.315 [2024-12-05 14:03:26.685512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.315 qpair failed and we were unable to recover it. 
00:31:44.315 [2024-12-05 14:03:26.685719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.315 [2024-12-05 14:03:26.685752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.315 qpair failed and we were unable to recover it. 00:31:44.315 [2024-12-05 14:03:26.686025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.315 [2024-12-05 14:03:26.686057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.315 qpair failed and we were unable to recover it. 00:31:44.315 [2024-12-05 14:03:26.686260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.315 [2024-12-05 14:03:26.686292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.315 qpair failed and we were unable to recover it. 00:31:44.315 [2024-12-05 14:03:26.686599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.315 [2024-12-05 14:03:26.686631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.315 qpair failed and we were unable to recover it. 00:31:44.315 [2024-12-05 14:03:26.686839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.315 [2024-12-05 14:03:26.686872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.315 qpair failed and we were unable to recover it. 
00:31:44.315 [2024-12-05 14:03:26.686989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.315 [2024-12-05 14:03:26.687032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.315 qpair failed and we were unable to recover it. 
00:31:44.315-00:31:44.318 [2024-12-05 14:03:26.687310 through 14:03:26.716531] (the same three-line failure repeated for every reconnect attempt: posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), followed by nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") 
00:31:44.318 [2024-12-05 14:03:26.716661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.318 [2024-12-05 14:03:26.716692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.318 qpair failed and we were unable to recover it. 00:31:44.318 [2024-12-05 14:03:26.716887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.318 [2024-12-05 14:03:26.716920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.318 qpair failed and we were unable to recover it. 00:31:44.318 [2024-12-05 14:03:26.717101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.318 [2024-12-05 14:03:26.717132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.318 qpair failed and we were unable to recover it. 00:31:44.318 [2024-12-05 14:03:26.717351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.318 [2024-12-05 14:03:26.717402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.318 qpair failed and we were unable to recover it. 00:31:44.318 [2024-12-05 14:03:26.717601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.318 [2024-12-05 14:03:26.717633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.318 qpair failed and we were unable to recover it. 
00:31:44.318 [2024-12-05 14:03:26.717836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.318 [2024-12-05 14:03:26.717875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.318 qpair failed and we were unable to recover it. 00:31:44.318 [2024-12-05 14:03:26.718067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.318 [2024-12-05 14:03:26.718100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.318 qpair failed and we were unable to recover it. 00:31:44.318 [2024-12-05 14:03:26.718302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.318 [2024-12-05 14:03:26.718334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.318 qpair failed and we were unable to recover it. 00:31:44.318 [2024-12-05 14:03:26.718539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.318 [2024-12-05 14:03:26.718572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.318 qpair failed and we were unable to recover it. 00:31:44.318 [2024-12-05 14:03:26.718824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.318 [2024-12-05 14:03:26.718856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.318 qpair failed and we were unable to recover it. 
00:31:44.319 [2024-12-05 14:03:26.719106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.719139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 00:31:44.319 [2024-12-05 14:03:26.719399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.719432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 00:31:44.319 [2024-12-05 14:03:26.719642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.719674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 00:31:44.319 [2024-12-05 14:03:26.719877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.719909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 00:31:44.319 [2024-12-05 14:03:26.720031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.720063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 
00:31:44.319 [2024-12-05 14:03:26.720257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.720289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 00:31:44.319 [2024-12-05 14:03:26.720542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.720576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 00:31:44.319 [2024-12-05 14:03:26.720696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.720728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 00:31:44.319 [2024-12-05 14:03:26.720928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.720961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 00:31:44.319 [2024-12-05 14:03:26.721188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.721220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 
00:31:44.319 [2024-12-05 14:03:26.721522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.721556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 00:31:44.319 [2024-12-05 14:03:26.721818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.721851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 00:31:44.319 [2024-12-05 14:03:26.721980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.722012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 00:31:44.319 [2024-12-05 14:03:26.722144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.722176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 00:31:44.319 [2024-12-05 14:03:26.722458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.722491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 
00:31:44.319 [2024-12-05 14:03:26.722676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.722710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 00:31:44.319 [2024-12-05 14:03:26.722855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.722887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 00:31:44.319 [2024-12-05 14:03:26.723025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.723057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 00:31:44.319 [2024-12-05 14:03:26.723335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.723375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 00:31:44.319 [2024-12-05 14:03:26.723607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.723640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 
00:31:44.319 [2024-12-05 14:03:26.723840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.723873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 00:31:44.319 [2024-12-05 14:03:26.724075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.724106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 00:31:44.319 [2024-12-05 14:03:26.724363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.724408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 00:31:44.319 [2024-12-05 14:03:26.724596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.724629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 00:31:44.319 [2024-12-05 14:03:26.724882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.724915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 
00:31:44.319 [2024-12-05 14:03:26.725167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.725199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 00:31:44.319 [2024-12-05 14:03:26.725405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.725439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 00:31:44.319 [2024-12-05 14:03:26.725710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.725742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 00:31:44.319 [2024-12-05 14:03:26.725940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.725973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 00:31:44.319 [2024-12-05 14:03:26.726237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.726269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 
00:31:44.319 [2024-12-05 14:03:26.726482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.726515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 00:31:44.319 [2024-12-05 14:03:26.726708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.726740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 00:31:44.319 [2024-12-05 14:03:26.727019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.727051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 00:31:44.319 [2024-12-05 14:03:26.727298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.319 [2024-12-05 14:03:26.727330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.319 qpair failed and we were unable to recover it. 00:31:44.320 [2024-12-05 14:03:26.727520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.727554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 
00:31:44.320 [2024-12-05 14:03:26.727837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.727876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 00:31:44.320 [2024-12-05 14:03:26.728132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.728164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 00:31:44.320 [2024-12-05 14:03:26.728357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.728400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 00:31:44.320 [2024-12-05 14:03:26.728701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.728734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 00:31:44.320 [2024-12-05 14:03:26.728920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.728953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 
00:31:44.320 [2024-12-05 14:03:26.729230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.729263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 00:31:44.320 [2024-12-05 14:03:26.729546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.729579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 00:31:44.320 [2024-12-05 14:03:26.729854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.729888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 00:31:44.320 [2024-12-05 14:03:26.730086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.730117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 00:31:44.320 [2024-12-05 14:03:26.730314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.730345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 
00:31:44.320 [2024-12-05 14:03:26.730625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.730658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 00:31:44.320 [2024-12-05 14:03:26.730942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.730975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 00:31:44.320 [2024-12-05 14:03:26.731253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.731286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 00:31:44.320 [2024-12-05 14:03:26.731573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.731607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 00:31:44.320 [2024-12-05 14:03:26.731738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.731771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 
00:31:44.320 [2024-12-05 14:03:26.732066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.732098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 00:31:44.320 [2024-12-05 14:03:26.732315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.732347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 00:31:44.320 [2024-12-05 14:03:26.732623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.732656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 00:31:44.320 [2024-12-05 14:03:26.732930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.732961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 00:31:44.320 [2024-12-05 14:03:26.733100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.733132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 
00:31:44.320 [2024-12-05 14:03:26.733411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.733445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 00:31:44.320 [2024-12-05 14:03:26.733747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.733780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 00:31:44.320 [2024-12-05 14:03:26.733987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.734019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 00:31:44.320 [2024-12-05 14:03:26.734208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.734239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 00:31:44.320 [2024-12-05 14:03:26.734491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.734525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 
00:31:44.320 [2024-12-05 14:03:26.734774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.734807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 00:31:44.320 [2024-12-05 14:03:26.735007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.735040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 00:31:44.320 [2024-12-05 14:03:26.735270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.735302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 00:31:44.320 [2024-12-05 14:03:26.735553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.735586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 00:31:44.320 [2024-12-05 14:03:26.735878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.320 [2024-12-05 14:03:26.735910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.320 qpair failed and we were unable to recover it. 
00:31:44.320 [2024-12-05 14:03:26.736202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.320 [2024-12-05 14:03:26.736234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:44.320 qpair failed and we were unable to recover it.
[... the same three-line error sequence — connect() failed with errno = 111 (ECONNREFUSED) in posix.c:1054:posix_sock_create, the sock connection error for tqpair=0x7fdb68000b90 (addr=10.0.0.2, port=4420) in nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock, and the unrecovered qpair failure — repeats continuously from 14:03:26.736 through 14:03:26.767; identical repeats elided ...]
00:31:44.323 [2024-12-05 14:03:26.767168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.323 [2024-12-05 14:03:26.767199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:44.323 qpair failed and we were unable to recover it.
00:31:44.323 [2024-12-05 14:03:26.767390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.323 [2024-12-05 14:03:26.767424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.323 qpair failed and we were unable to recover it. 00:31:44.323 [2024-12-05 14:03:26.767731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.323 [2024-12-05 14:03:26.767764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.323 qpair failed and we were unable to recover it. 00:31:44.323 [2024-12-05 14:03:26.767986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.323 [2024-12-05 14:03:26.768024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.323 qpair failed and we were unable to recover it. 00:31:44.323 [2024-12-05 14:03:26.768203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.323 [2024-12-05 14:03:26.768235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.323 qpair failed and we were unable to recover it. 00:31:44.323 [2024-12-05 14:03:26.768425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.323 [2024-12-05 14:03:26.768459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.323 qpair failed and we were unable to recover it. 
00:31:44.323 [2024-12-05 14:03:26.768679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.323 [2024-12-05 14:03:26.768711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.323 qpair failed and we were unable to recover it. 00:31:44.324 [2024-12-05 14:03:26.768890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.768922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 00:31:44.324 [2024-12-05 14:03:26.769195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.769228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 00:31:44.324 [2024-12-05 14:03:26.769422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.769454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 00:31:44.324 [2024-12-05 14:03:26.769720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.769753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 
00:31:44.324 [2024-12-05 14:03:26.769933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.769965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 00:31:44.324 [2024-12-05 14:03:26.770186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.770219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 00:31:44.324 [2024-12-05 14:03:26.770420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.770453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 00:31:44.324 [2024-12-05 14:03:26.770635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.770668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 00:31:44.324 [2024-12-05 14:03:26.770917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.770949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 
00:31:44.324 [2024-12-05 14:03:26.771247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.771279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 00:31:44.324 [2024-12-05 14:03:26.771550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.771584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 00:31:44.324 [2024-12-05 14:03:26.771866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.771899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 00:31:44.324 [2024-12-05 14:03:26.772098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.772130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 00:31:44.324 [2024-12-05 14:03:26.772400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.772434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 
00:31:44.324 [2024-12-05 14:03:26.772689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.772721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 00:31:44.324 [2024-12-05 14:03:26.773017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.773049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 00:31:44.324 [2024-12-05 14:03:26.773290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.773322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 00:31:44.324 [2024-12-05 14:03:26.773637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.773670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 00:31:44.324 [2024-12-05 14:03:26.773855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.773887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 
00:31:44.324 [2024-12-05 14:03:26.774142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.774174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 00:31:44.324 [2024-12-05 14:03:26.774426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.774460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 00:31:44.324 [2024-12-05 14:03:26.774609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.774641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 00:31:44.324 [2024-12-05 14:03:26.774866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.774898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 00:31:44.324 [2024-12-05 14:03:26.775188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.775220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 
00:31:44.324 [2024-12-05 14:03:26.775400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.775434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 00:31:44.324 [2024-12-05 14:03:26.775613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.775647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 00:31:44.324 [2024-12-05 14:03:26.775850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.775882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 00:31:44.324 [2024-12-05 14:03:26.776182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.776214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 00:31:44.324 [2024-12-05 14:03:26.776483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.776517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 
00:31:44.324 [2024-12-05 14:03:26.776749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.776781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 00:31:44.324 [2024-12-05 14:03:26.777083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.777116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 00:31:44.324 [2024-12-05 14:03:26.777411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.324 [2024-12-05 14:03:26.777445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.324 qpair failed and we were unable to recover it. 00:31:44.324 [2024-12-05 14:03:26.777695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.777727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 00:31:44.325 [2024-12-05 14:03:26.777919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.777952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 
00:31:44.325 [2024-12-05 14:03:26.778204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.778236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 00:31:44.325 [2024-12-05 14:03:26.778425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.778458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 00:31:44.325 [2024-12-05 14:03:26.778584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.778621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 00:31:44.325 [2024-12-05 14:03:26.778896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.778929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 00:31:44.325 [2024-12-05 14:03:26.779225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.779257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 
00:31:44.325 [2024-12-05 14:03:26.779475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.779509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 00:31:44.325 [2024-12-05 14:03:26.779725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.779758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 00:31:44.325 [2024-12-05 14:03:26.780033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.780065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 00:31:44.325 [2024-12-05 14:03:26.780263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.780295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 00:31:44.325 [2024-12-05 14:03:26.780556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.780590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 
00:31:44.325 [2024-12-05 14:03:26.780788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.780820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 00:31:44.325 [2024-12-05 14:03:26.781036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.781069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 00:31:44.325 [2024-12-05 14:03:26.781321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.781353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 00:31:44.325 [2024-12-05 14:03:26.781637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.781669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 00:31:44.325 [2024-12-05 14:03:26.781863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.781896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 
00:31:44.325 [2024-12-05 14:03:26.782158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.782190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 00:31:44.325 [2024-12-05 14:03:26.782395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.782428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 00:31:44.325 [2024-12-05 14:03:26.782550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.782582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 00:31:44.325 [2024-12-05 14:03:26.782806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.782838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 00:31:44.325 [2024-12-05 14:03:26.783063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.783095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 
00:31:44.325 [2024-12-05 14:03:26.783405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.783438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 00:31:44.325 [2024-12-05 14:03:26.783698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.783731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 00:31:44.325 [2024-12-05 14:03:26.784005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.784037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 00:31:44.325 [2024-12-05 14:03:26.784287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.784320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 00:31:44.325 [2024-12-05 14:03:26.784639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.784672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 
00:31:44.325 [2024-12-05 14:03:26.784952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.784984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 00:31:44.325 [2024-12-05 14:03:26.785280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.785312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 00:31:44.325 [2024-12-05 14:03:26.785587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.785620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 00:31:44.325 [2024-12-05 14:03:26.785895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.785927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 00:31:44.325 [2024-12-05 14:03:26.786115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.786148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 
00:31:44.325 [2024-12-05 14:03:26.786342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.325 [2024-12-05 14:03:26.786385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.325 qpair failed and we were unable to recover it. 00:31:44.326 [2024-12-05 14:03:26.786611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.326 [2024-12-05 14:03:26.786643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.326 qpair failed and we were unable to recover it. 00:31:44.326 [2024-12-05 14:03:26.786766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.326 [2024-12-05 14:03:26.786798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.326 qpair failed and we were unable to recover it. 00:31:44.326 [2024-12-05 14:03:26.787010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.326 [2024-12-05 14:03:26.787042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.326 qpair failed and we were unable to recover it. 00:31:44.326 [2024-12-05 14:03:26.787230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.326 [2024-12-05 14:03:26.787262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.326 qpair failed and we were unable to recover it. 
00:31:44.326 [2024-12-05 14:03:26.787517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.326 [2024-12-05 14:03:26.787551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.326 qpair failed and we were unable to recover it. 00:31:44.326 [2024-12-05 14:03:26.787734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.326 [2024-12-05 14:03:26.787766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.326 qpair failed and we were unable to recover it. 00:31:44.326 [2024-12-05 14:03:26.787967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.326 [2024-12-05 14:03:26.788000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.326 qpair failed and we were unable to recover it. 00:31:44.326 [2024-12-05 14:03:26.788211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.326 [2024-12-05 14:03:26.788243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.326 qpair failed and we were unable to recover it. 00:31:44.326 [2024-12-05 14:03:26.788516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.326 [2024-12-05 14:03:26.788550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.326 qpair failed and we were unable to recover it. 
00:31:44.329 [2024-12-05 14:03:26.817930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.329 [2024-12-05 14:03:26.817964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.329 qpair failed and we were unable to recover it. 00:31:44.329 [2024-12-05 14:03:26.818149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.329 [2024-12-05 14:03:26.818181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.329 qpair failed and we were unable to recover it. 00:31:44.329 [2024-12-05 14:03:26.818444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.329 [2024-12-05 14:03:26.818481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.329 qpair failed and we were unable to recover it. 00:31:44.329 [2024-12-05 14:03:26.818598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.329 [2024-12-05 14:03:26.818633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.329 qpair failed and we were unable to recover it. 00:31:44.329 [2024-12-05 14:03:26.818946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.329 [2024-12-05 14:03:26.818979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.329 qpair failed and we were unable to recover it. 
00:31:44.329 [2024-12-05 14:03:26.819161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.329 [2024-12-05 14:03:26.819195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.329 qpair failed and we were unable to recover it. 00:31:44.329 [2024-12-05 14:03:26.819408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.329 [2024-12-05 14:03:26.819442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.329 qpair failed and we were unable to recover it. 00:31:44.329 [2024-12-05 14:03:26.819717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.329 [2024-12-05 14:03:26.819753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.329 qpair failed and we were unable to recover it. 00:31:44.329 [2024-12-05 14:03:26.820021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.329 [2024-12-05 14:03:26.820055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.329 qpair failed and we were unable to recover it. 00:31:44.329 [2024-12-05 14:03:26.820166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.329 [2024-12-05 14:03:26.820199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.329 qpair failed and we were unable to recover it. 
00:31:44.329 [2024-12-05 14:03:26.820478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.329 [2024-12-05 14:03:26.820513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.329 qpair failed and we were unable to recover it. 00:31:44.329 [2024-12-05 14:03:26.820774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.329 [2024-12-05 14:03:26.820808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.329 qpair failed and we were unable to recover it. 00:31:44.329 [2024-12-05 14:03:26.821025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.329 [2024-12-05 14:03:26.821064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.329 qpair failed and we were unable to recover it. 00:31:44.329 [2024-12-05 14:03:26.821343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.329 [2024-12-05 14:03:26.821388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.329 qpair failed and we were unable to recover it. 00:31:44.329 [2024-12-05 14:03:26.821503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.329 [2024-12-05 14:03:26.821537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.329 qpair failed and we were unable to recover it. 
00:31:44.329 [2024-12-05 14:03:26.821717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.329 [2024-12-05 14:03:26.821753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.329 qpair failed and we were unable to recover it. 00:31:44.329 [2024-12-05 14:03:26.821972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.329 [2024-12-05 14:03:26.822005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.329 qpair failed and we were unable to recover it. 00:31:44.329 [2024-12-05 14:03:26.822206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.329 [2024-12-05 14:03:26.822241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.329 qpair failed and we were unable to recover it. 00:31:44.329 [2024-12-05 14:03:26.822431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.329 [2024-12-05 14:03:26.822465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.329 qpair failed and we were unable to recover it. 00:31:44.329 [2024-12-05 14:03:26.822593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.329 [2024-12-05 14:03:26.822626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.329 qpair failed and we were unable to recover it. 
00:31:44.329 [2024-12-05 14:03:26.822816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.329 [2024-12-05 14:03:26.822850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.329 qpair failed and we were unable to recover it. 00:31:44.329 [2024-12-05 14:03:26.823114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.329 [2024-12-05 14:03:26.823147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.329 qpair failed and we were unable to recover it. 00:31:44.329 [2024-12-05 14:03:26.823261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.329 [2024-12-05 14:03:26.823296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.329 qpair failed and we were unable to recover it. 00:31:44.329 [2024-12-05 14:03:26.823575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.329 [2024-12-05 14:03:26.823609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.329 qpair failed and we were unable to recover it. 00:31:44.329 [2024-12-05 14:03:26.823859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.329 [2024-12-05 14:03:26.823895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.329 qpair failed and we were unable to recover it. 
00:31:44.329 [2024-12-05 14:03:26.824190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.329 [2024-12-05 14:03:26.824224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.329 qpair failed and we were unable to recover it. 00:31:44.329 [2024-12-05 14:03:26.824360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.329 [2024-12-05 14:03:26.824405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.329 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.824684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.824718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.825018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.825051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.825319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.825352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 
00:31:44.330 [2024-12-05 14:03:26.825573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.825608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.825836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.825869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.826001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.826035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.826331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.826364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.826655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.826690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 
00:31:44.330 [2024-12-05 14:03:26.826962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.826996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.827207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.827241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.827434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.827468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.827746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.827781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.827985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.828019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 
00:31:44.330 [2024-12-05 14:03:26.828217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.828252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.828448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.828483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.828738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.828771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.829074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.829108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.829401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.829437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 
00:31:44.330 [2024-12-05 14:03:26.829734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.829771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.830051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.830084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.830365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.830410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.830613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.830647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.830954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.830986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 
00:31:44.330 [2024-12-05 14:03:26.831241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.831274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.831406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.831441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.831640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.831680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.831959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.831993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.832288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.832323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 
00:31:44.330 [2024-12-05 14:03:26.832636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.832672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.832856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.832889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.833082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.833115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.833419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.833454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.833700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.833734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 
00:31:44.330 [2024-12-05 14:03:26.833931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.833965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.834165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.834198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.834390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.834426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.834689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.834722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.330 qpair failed and we were unable to recover it. 00:31:44.330 [2024-12-05 14:03:26.834977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.330 [2024-12-05 14:03:26.835011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.331 qpair failed and we were unable to recover it. 
00:31:44.331 [2024-12-05 14:03:26.835312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.331 [2024-12-05 14:03:26.835345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.331 qpair failed and we were unable to recover it. 00:31:44.331 [2024-12-05 14:03:26.835646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.331 [2024-12-05 14:03:26.835680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.331 qpair failed and we were unable to recover it. 00:31:44.331 [2024-12-05 14:03:26.835829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.331 [2024-12-05 14:03:26.835863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.331 qpair failed and we were unable to recover it. 00:31:44.331 [2024-12-05 14:03:26.836092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.331 [2024-12-05 14:03:26.836127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.331 qpair failed and we were unable to recover it. 00:31:44.331 [2024-12-05 14:03:26.836309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.331 [2024-12-05 14:03:26.836342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.331 qpair failed and we were unable to recover it. 
00:31:44.331 [2024-12-05 14:03:26.836550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.331 [2024-12-05 14:03:26.836584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.331 qpair failed and we were unable to recover it. 00:31:44.331 [2024-12-05 14:03:26.836868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.331 [2024-12-05 14:03:26.836900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.331 qpair failed and we were unable to recover it. 00:31:44.331 [2024-12-05 14:03:26.837086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.331 [2024-12-05 14:03:26.837121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.331 qpair failed and we were unable to recover it. 00:31:44.331 [2024-12-05 14:03:26.837393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.331 [2024-12-05 14:03:26.837430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.331 qpair failed and we were unable to recover it. 00:31:44.331 [2024-12-05 14:03:26.837651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.331 [2024-12-05 14:03:26.837685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.331 qpair failed and we were unable to recover it. 
00:31:44.331 [2024-12-05 14:03:26.837936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.331 [2024-12-05 14:03:26.837968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.331 qpair failed and we were unable to recover it. 00:31:44.331 [2024-12-05 14:03:26.838160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.331 [2024-12-05 14:03:26.838196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.331 qpair failed and we were unable to recover it. 00:31:44.331 [2024-12-05 14:03:26.838447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.331 [2024-12-05 14:03:26.838480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.331 qpair failed and we were unable to recover it. 00:31:44.331 [2024-12-05 14:03:26.838664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.331 [2024-12-05 14:03:26.838699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.331 qpair failed and we were unable to recover it. 00:31:44.331 [2024-12-05 14:03:26.838907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.331 [2024-12-05 14:03:26.838942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.331 qpair failed and we were unable to recover it. 
00:31:44.625 [2024-12-05 14:03:26.868390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.625 [2024-12-05 14:03:26.868424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.625 qpair failed and we were unable to recover it. 00:31:44.625 [2024-12-05 14:03:26.868653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.625 [2024-12-05 14:03:26.868687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.625 qpair failed and we were unable to recover it. 00:31:44.626 [2024-12-05 14:03:26.869000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.626 [2024-12-05 14:03:26.869032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.626 qpair failed and we were unable to recover it. 00:31:44.626 [2024-12-05 14:03:26.869326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.626 [2024-12-05 14:03:26.869361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.626 qpair failed and we were unable to recover it. 00:31:44.626 [2024-12-05 14:03:26.869579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.626 [2024-12-05 14:03:26.869613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.626 qpair failed and we were unable to recover it. 
00:31:44.626 [2024-12-05 14:03:26.869867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.626 [2024-12-05 14:03:26.869899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.626 qpair failed and we were unable to recover it. 00:31:44.626 [2024-12-05 14:03:26.870205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.626 [2024-12-05 14:03:26.870238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.626 qpair failed and we were unable to recover it. 00:31:44.626 [2024-12-05 14:03:26.870487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.626 [2024-12-05 14:03:26.870522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.626 qpair failed and we were unable to recover it. 00:31:44.626 [2024-12-05 14:03:26.870744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.626 [2024-12-05 14:03:26.870777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.626 qpair failed and we were unable to recover it. 00:31:44.626 [2024-12-05 14:03:26.870971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.626 [2024-12-05 14:03:26.871004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.626 qpair failed and we were unable to recover it. 
00:31:44.626 [2024-12-05 14:03:26.871254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.626 [2024-12-05 14:03:26.871288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.626 qpair failed and we were unable to recover it. 00:31:44.626 [2024-12-05 14:03:26.871413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.626 [2024-12-05 14:03:26.871447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.626 qpair failed and we were unable to recover it. 00:31:44.626 [2024-12-05 14:03:26.871698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.626 [2024-12-05 14:03:26.871733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.626 qpair failed and we were unable to recover it. 00:31:44.626 [2024-12-05 14:03:26.871919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.626 [2024-12-05 14:03:26.871954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.626 qpair failed and we were unable to recover it. 00:31:44.626 [2024-12-05 14:03:26.872235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.626 [2024-12-05 14:03:26.872268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.626 qpair failed and we were unable to recover it. 
00:31:44.626 [2024-12-05 14:03:26.872500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.626 [2024-12-05 14:03:26.872534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.626 qpair failed and we were unable to recover it. 00:31:44.626 [2024-12-05 14:03:26.872809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.626 [2024-12-05 14:03:26.872843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.626 qpair failed and we were unable to recover it. 00:31:44.626 [2024-12-05 14:03:26.873157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.626 [2024-12-05 14:03:26.873191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.626 qpair failed and we were unable to recover it. 00:31:44.626 [2024-12-05 14:03:26.873446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.626 [2024-12-05 14:03:26.873480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.626 qpair failed and we were unable to recover it. 00:31:44.626 [2024-12-05 14:03:26.873602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.626 [2024-12-05 14:03:26.873635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.626 qpair failed and we were unable to recover it. 
00:31:44.626 [2024-12-05 14:03:26.873905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.626 [2024-12-05 14:03:26.873938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.626 qpair failed and we were unable to recover it. 00:31:44.626 [2024-12-05 14:03:26.874216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.626 [2024-12-05 14:03:26.874250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.626 qpair failed and we were unable to recover it. 00:31:44.626 [2024-12-05 14:03:26.874384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.626 [2024-12-05 14:03:26.874425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.626 qpair failed and we were unable to recover it. 00:31:44.626 [2024-12-05 14:03:26.874732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.626 [2024-12-05 14:03:26.874765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.626 qpair failed and we were unable to recover it. 00:31:44.626 [2024-12-05 14:03:26.875020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.626 [2024-12-05 14:03:26.875055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.626 qpair failed and we were unable to recover it. 
00:31:44.626 [2024-12-05 14:03:26.875346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.626 [2024-12-05 14:03:26.875393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.626 qpair failed and we were unable to recover it. 00:31:44.626 [2024-12-05 14:03:26.875654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.626 [2024-12-05 14:03:26.875688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.626 qpair failed and we were unable to recover it. 00:31:44.626 [2024-12-05 14:03:26.875978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.626 [2024-12-05 14:03:26.876011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.626 qpair failed and we were unable to recover it. 00:31:44.626 [2024-12-05 14:03:26.876203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.626 [2024-12-05 14:03:26.876235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.626 qpair failed and we were unable to recover it. 00:31:44.626 [2024-12-05 14:03:26.876389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.626 [2024-12-05 14:03:26.876423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.627 qpair failed and we were unable to recover it. 
00:31:44.627 [2024-12-05 14:03:26.876705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.627 [2024-12-05 14:03:26.876740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.627 qpair failed and we were unable to recover it. 00:31:44.627 [2024-12-05 14:03:26.876862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.627 [2024-12-05 14:03:26.876896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.627 qpair failed and we were unable to recover it. 00:31:44.627 [2024-12-05 14:03:26.877034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.627 [2024-12-05 14:03:26.877066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.627 qpair failed and we were unable to recover it. 00:31:44.627 [2024-12-05 14:03:26.877243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.627 [2024-12-05 14:03:26.877277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.627 qpair failed and we were unable to recover it. 00:31:44.627 [2024-12-05 14:03:26.877580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.627 [2024-12-05 14:03:26.877614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.627 qpair failed and we were unable to recover it. 
00:31:44.627 [2024-12-05 14:03:26.877798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.627 [2024-12-05 14:03:26.877832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.627 qpair failed and we were unable to recover it. 00:31:44.627 [2024-12-05 14:03:26.878113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.627 [2024-12-05 14:03:26.878148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.627 qpair failed and we were unable to recover it. 00:31:44.627 [2024-12-05 14:03:26.878347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.627 [2024-12-05 14:03:26.878391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.627 qpair failed and we were unable to recover it. 00:31:44.627 [2024-12-05 14:03:26.878590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.627 [2024-12-05 14:03:26.878626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.627 qpair failed and we were unable to recover it. 00:31:44.627 [2024-12-05 14:03:26.878823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.627 [2024-12-05 14:03:26.878857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.627 qpair failed and we were unable to recover it. 
00:31:44.627 [2024-12-05 14:03:26.879073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.627 [2024-12-05 14:03:26.879109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.627 qpair failed and we were unable to recover it. 00:31:44.627 [2024-12-05 14:03:26.879304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.627 [2024-12-05 14:03:26.879339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.627 qpair failed and we were unable to recover it. 00:31:44.627 [2024-12-05 14:03:26.879494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.627 [2024-12-05 14:03:26.879529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.627 qpair failed and we were unable to recover it. 00:31:44.627 [2024-12-05 14:03:26.879641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.627 [2024-12-05 14:03:26.879675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.627 qpair failed and we were unable to recover it. 00:31:44.627 [2024-12-05 14:03:26.879796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.627 [2024-12-05 14:03:26.879829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.627 qpair failed and we were unable to recover it. 
00:31:44.627 [2024-12-05 14:03:26.880013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.627 [2024-12-05 14:03:26.880045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.627 qpair failed and we were unable to recover it. 00:31:44.627 [2024-12-05 14:03:26.880346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.627 [2024-12-05 14:03:26.880391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.627 qpair failed and we were unable to recover it. 00:31:44.627 [2024-12-05 14:03:26.880621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.627 [2024-12-05 14:03:26.880656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.627 qpair failed and we were unable to recover it. 00:31:44.627 [2024-12-05 14:03:26.880872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.627 [2024-12-05 14:03:26.880907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.627 qpair failed and we were unable to recover it. 00:31:44.627 [2024-12-05 14:03:26.881093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.627 [2024-12-05 14:03:26.881132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.627 qpair failed and we were unable to recover it. 
00:31:44.627 [2024-12-05 14:03:26.881344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.627 [2024-12-05 14:03:26.881393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.627 qpair failed and we were unable to recover it. 00:31:44.627 [2024-12-05 14:03:26.881619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.627 [2024-12-05 14:03:26.881652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.627 qpair failed and we were unable to recover it. 00:31:44.627 [2024-12-05 14:03:26.881867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.627 [2024-12-05 14:03:26.881902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.627 qpair failed and we were unable to recover it. 00:31:44.627 [2024-12-05 14:03:26.882051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.627 [2024-12-05 14:03:26.882085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.627 qpair failed and we were unable to recover it. 00:31:44.627 [2024-12-05 14:03:26.882354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.627 [2024-12-05 14:03:26.882401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.627 qpair failed and we were unable to recover it. 
00:31:44.627 [2024-12-05 14:03:26.882589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.627 [2024-12-05 14:03:26.882622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.627 qpair failed and we were unable to recover it. 00:31:44.627 [2024-12-05 14:03:26.882879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.627 [2024-12-05 14:03:26.882913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.627 qpair failed and we were unable to recover it. 00:31:44.627 [2024-12-05 14:03:26.883138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.627 [2024-12-05 14:03:26.883173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.627 qpair failed and we were unable to recover it. 00:31:44.627 [2024-12-05 14:03:26.883427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.627 [2024-12-05 14:03:26.883462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.628 qpair failed and we were unable to recover it. 00:31:44.628 [2024-12-05 14:03:26.883682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.628 [2024-12-05 14:03:26.883717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.628 qpair failed and we were unable to recover it. 
00:31:44.628 [2024-12-05 14:03:26.883939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.628 [2024-12-05 14:03:26.883971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.628 qpair failed and we were unable to recover it. 00:31:44.628 [2024-12-05 14:03:26.884172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.628 [2024-12-05 14:03:26.884207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.628 qpair failed and we were unable to recover it. 00:31:44.628 [2024-12-05 14:03:26.884467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.628 [2024-12-05 14:03:26.884501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.628 qpair failed and we were unable to recover it. 00:31:44.628 [2024-12-05 14:03:26.884692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.628 [2024-12-05 14:03:26.884726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.628 qpair failed and we were unable to recover it. 00:31:44.628 [2024-12-05 14:03:26.884928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.628 [2024-12-05 14:03:26.884961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.628 qpair failed and we were unable to recover it. 
00:31:44.628 [2024-12-05 14:03:26.885155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.628 [2024-12-05 14:03:26.885188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.628 qpair failed and we were unable to recover it. 00:31:44.628 [2024-12-05 14:03:26.885409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.628 [2024-12-05 14:03:26.885445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.628 qpair failed and we were unable to recover it. 00:31:44.628 [2024-12-05 14:03:26.885592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.628 [2024-12-05 14:03:26.885627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.628 qpair failed and we were unable to recover it. 00:31:44.628 [2024-12-05 14:03:26.885820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.628 [2024-12-05 14:03:26.885852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.628 qpair failed and we were unable to recover it. 00:31:44.628 [2024-12-05 14:03:26.886055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.628 [2024-12-05 14:03:26.886089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.628 qpair failed and we were unable to recover it. 
00:31:44.628 [2024-12-05 14:03:26.886234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.628 [2024-12-05 14:03:26.886270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.628 qpair failed and we were unable to recover it. 00:31:44.628 [2024-12-05 14:03:26.886455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.628 [2024-12-05 14:03:26.886489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.628 qpair failed and we were unable to recover it. 00:31:44.628 [2024-12-05 14:03:26.886620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.628 [2024-12-05 14:03:26.886653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.628 qpair failed and we were unable to recover it. 00:31:44.628 [2024-12-05 14:03:26.886848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.628 [2024-12-05 14:03:26.886883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.628 qpair failed and we were unable to recover it. 00:31:44.628 [2024-12-05 14:03:26.887026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.628 [2024-12-05 14:03:26.887060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.628 qpair failed and we were unable to recover it. 
00:31:44.628 [2024-12-05 14:03:26.887320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.628 [2024-12-05 14:03:26.887353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.628 qpair failed and we were unable to recover it. 
00:31:44.632 (the above connect()/qpair-failure sequence repeated for every subsequent retry against tqpair=0x7fdb68000b90, addr=10.0.0.2, port=4420 through [2024-12-05 14:03:26.917660])
00:31:44.632 [2024-12-05 14:03:26.917856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.632 [2024-12-05 14:03:26.917888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.632 qpair failed and we were unable to recover it. 00:31:44.632 [2024-12-05 14:03:26.918084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.632 [2024-12-05 14:03:26.918118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.632 qpair failed and we were unable to recover it. 00:31:44.632 [2024-12-05 14:03:26.918320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.632 [2024-12-05 14:03:26.918353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.632 qpair failed and we were unable to recover it. 00:31:44.632 [2024-12-05 14:03:26.918618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.632 [2024-12-05 14:03:26.918652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.632 qpair failed and we were unable to recover it. 00:31:44.632 [2024-12-05 14:03:26.918932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.632 [2024-12-05 14:03:26.918966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.632 qpair failed and we were unable to recover it. 
00:31:44.632 [2024-12-05 14:03:26.919147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.632 [2024-12-05 14:03:26.919180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.632 qpair failed and we were unable to recover it. 00:31:44.632 [2024-12-05 14:03:26.919401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.632 [2024-12-05 14:03:26.919436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.632 qpair failed and we were unable to recover it. 00:31:44.633 [2024-12-05 14:03:26.919572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.633 [2024-12-05 14:03:26.919605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.633 qpair failed and we were unable to recover it. 00:31:44.633 [2024-12-05 14:03:26.919797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.633 [2024-12-05 14:03:26.919830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.633 qpair failed and we were unable to recover it. 00:31:44.633 [2024-12-05 14:03:26.919979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.633 [2024-12-05 14:03:26.920014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.633 qpair failed and we were unable to recover it. 
00:31:44.633 [2024-12-05 14:03:26.920287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.633 [2024-12-05 14:03:26.920321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.633 qpair failed and we were unable to recover it. 00:31:44.633 [2024-12-05 14:03:26.920537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.633 [2024-12-05 14:03:26.920571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.633 qpair failed and we were unable to recover it. 00:31:44.633 [2024-12-05 14:03:26.920768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.633 [2024-12-05 14:03:26.920801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.633 qpair failed and we were unable to recover it. 00:31:44.633 [2024-12-05 14:03:26.920988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.633 [2024-12-05 14:03:26.921023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.633 qpair failed and we were unable to recover it. 00:31:44.633 [2024-12-05 14:03:26.921153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.633 [2024-12-05 14:03:26.921185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.633 qpair failed and we were unable to recover it. 
00:31:44.633 [2024-12-05 14:03:26.921391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.633 [2024-12-05 14:03:26.921425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.633 qpair failed and we were unable to recover it. 00:31:44.633 [2024-12-05 14:03:26.921682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.633 [2024-12-05 14:03:26.921716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.633 qpair failed and we were unable to recover it. 00:31:44.633 [2024-12-05 14:03:26.921912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.633 [2024-12-05 14:03:26.921945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.633 qpair failed and we were unable to recover it. 00:31:44.633 [2024-12-05 14:03:26.922132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.633 [2024-12-05 14:03:26.922166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.633 qpair failed and we were unable to recover it. 00:31:44.633 [2024-12-05 14:03:26.922307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.633 [2024-12-05 14:03:26.922347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.633 qpair failed and we were unable to recover it. 
00:31:44.633 [2024-12-05 14:03:26.922563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.633 [2024-12-05 14:03:26.922597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.633 qpair failed and we were unable to recover it. 00:31:44.633 [2024-12-05 14:03:26.922739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.633 [2024-12-05 14:03:26.922773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.633 qpair failed and we were unable to recover it. 00:31:44.633 [2024-12-05 14:03:26.922990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.633 [2024-12-05 14:03:26.923024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.633 qpair failed and we were unable to recover it. 00:31:44.633 [2024-12-05 14:03:26.923222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.633 [2024-12-05 14:03:26.923256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.633 qpair failed and we were unable to recover it. 00:31:44.633 [2024-12-05 14:03:26.923547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.633 [2024-12-05 14:03:26.923582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.633 qpair failed and we were unable to recover it. 
00:31:44.633 [2024-12-05 14:03:26.923769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.633 [2024-12-05 14:03:26.923804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.633 qpair failed and we were unable to recover it. 00:31:44.633 [2024-12-05 14:03:26.924016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.633 [2024-12-05 14:03:26.924050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.633 qpair failed and we were unable to recover it. 00:31:44.633 [2024-12-05 14:03:26.924255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.633 [2024-12-05 14:03:26.924290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.633 qpair failed and we were unable to recover it. 00:31:44.633 [2024-12-05 14:03:26.924556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.633 [2024-12-05 14:03:26.924592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.633 qpair failed and we were unable to recover it. 00:31:44.633 [2024-12-05 14:03:26.924854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.633 [2024-12-05 14:03:26.924887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.633 qpair failed and we were unable to recover it. 
00:31:44.633 [2024-12-05 14:03:26.925069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.633 [2024-12-05 14:03:26.925101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.633 qpair failed and we were unable to recover it. 00:31:44.633 [2024-12-05 14:03:26.925309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.633 [2024-12-05 14:03:26.925350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.633 qpair failed and we were unable to recover it. 00:31:44.633 [2024-12-05 14:03:26.925558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.633 [2024-12-05 14:03:26.925591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.633 qpair failed and we were unable to recover it. 00:31:44.633 [2024-12-05 14:03:26.925743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.633 [2024-12-05 14:03:26.925778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.633 qpair failed and we were unable to recover it. 00:31:44.633 [2024-12-05 14:03:26.925980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.633 [2024-12-05 14:03:26.926021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.633 qpair failed and we were unable to recover it. 
00:31:44.633 [2024-12-05 14:03:26.926171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.633 [2024-12-05 14:03:26.926205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.633 qpair failed and we were unable to recover it. 00:31:44.633 [2024-12-05 14:03:26.926391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.633 [2024-12-05 14:03:26.926429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.633 qpair failed and we were unable to recover it. 00:31:44.633 [2024-12-05 14:03:26.926626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.634 [2024-12-05 14:03:26.926659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.634 qpair failed and we were unable to recover it. 00:31:44.634 [2024-12-05 14:03:26.926927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.634 [2024-12-05 14:03:26.926959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.634 qpair failed and we were unable to recover it. 00:31:44.634 [2024-12-05 14:03:26.927154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.634 [2024-12-05 14:03:26.927188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.634 qpair failed and we were unable to recover it. 
00:31:44.634 [2024-12-05 14:03:26.927485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.634 [2024-12-05 14:03:26.927519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.634 qpair failed and we were unable to recover it. 00:31:44.634 [2024-12-05 14:03:26.927773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.634 [2024-12-05 14:03:26.927809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.634 qpair failed and we were unable to recover it. 00:31:44.634 [2024-12-05 14:03:26.928016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.634 [2024-12-05 14:03:26.928050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.634 qpair failed and we were unable to recover it. 00:31:44.634 [2024-12-05 14:03:26.928183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.634 [2024-12-05 14:03:26.928216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.634 qpair failed and we were unable to recover it. 00:31:44.634 [2024-12-05 14:03:26.928418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.634 [2024-12-05 14:03:26.928453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.634 qpair failed and we were unable to recover it. 
00:31:44.634 [2024-12-05 14:03:26.928654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.634 [2024-12-05 14:03:26.928688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.634 qpair failed and we were unable to recover it. 00:31:44.634 [2024-12-05 14:03:26.928918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.634 [2024-12-05 14:03:26.928950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.634 qpair failed and we were unable to recover it. 00:31:44.634 [2024-12-05 14:03:26.929084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.634 [2024-12-05 14:03:26.929116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.634 qpair failed and we were unable to recover it. 00:31:44.634 [2024-12-05 14:03:26.929377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.634 [2024-12-05 14:03:26.929412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.634 qpair failed and we were unable to recover it. 00:31:44.634 [2024-12-05 14:03:26.929639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.634 [2024-12-05 14:03:26.929672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.634 qpair failed and we were unable to recover it. 
00:31:44.634 [2024-12-05 14:03:26.929945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.634 [2024-12-05 14:03:26.929979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.634 qpair failed and we were unable to recover it. 00:31:44.634 [2024-12-05 14:03:26.930167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.634 [2024-12-05 14:03:26.930198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.634 qpair failed and we were unable to recover it. 00:31:44.634 [2024-12-05 14:03:26.930395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.634 [2024-12-05 14:03:26.930430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.634 qpair failed and we were unable to recover it. 00:31:44.634 [2024-12-05 14:03:26.930568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.634 [2024-12-05 14:03:26.930602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.634 qpair failed and we were unable to recover it. 00:31:44.634 [2024-12-05 14:03:26.930799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.634 [2024-12-05 14:03:26.930833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.634 qpair failed and we were unable to recover it. 
00:31:44.634 [2024-12-05 14:03:26.931108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.634 [2024-12-05 14:03:26.931141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.634 qpair failed and we were unable to recover it. 00:31:44.634 [2024-12-05 14:03:26.931409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.634 [2024-12-05 14:03:26.931444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.634 qpair failed and we were unable to recover it. 00:31:44.634 [2024-12-05 14:03:26.931725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.634 [2024-12-05 14:03:26.931757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.634 qpair failed and we were unable to recover it. 00:31:44.634 [2024-12-05 14:03:26.931978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.634 [2024-12-05 14:03:26.932014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.634 qpair failed and we were unable to recover it. 00:31:44.634 [2024-12-05 14:03:26.932201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.634 [2024-12-05 14:03:26.932241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.634 qpair failed and we were unable to recover it. 
00:31:44.634 [2024-12-05 14:03:26.932495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.634 [2024-12-05 14:03:26.932530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.634 qpair failed and we were unable to recover it. 00:31:44.634 [2024-12-05 14:03:26.932807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.634 [2024-12-05 14:03:26.932840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.634 qpair failed and we were unable to recover it. 00:31:44.634 [2024-12-05 14:03:26.933042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.634 [2024-12-05 14:03:26.933076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.634 qpair failed and we were unable to recover it. 00:31:44.634 [2024-12-05 14:03:26.933274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.634 [2024-12-05 14:03:26.933307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.634 qpair failed and we were unable to recover it. 00:31:44.634 [2024-12-05 14:03:26.933455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.634 [2024-12-05 14:03:26.933489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.634 qpair failed and we were unable to recover it. 
00:31:44.634 [2024-12-05 14:03:26.933742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.634 [2024-12-05 14:03:26.933776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.634 qpair failed and we were unable to recover it. 00:31:44.634 [2024-12-05 14:03:26.934059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.635 [2024-12-05 14:03:26.934093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.635 qpair failed and we were unable to recover it. 00:31:44.635 [2024-12-05 14:03:26.934389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.635 [2024-12-05 14:03:26.934425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.635 qpair failed and we were unable to recover it. 00:31:44.635 [2024-12-05 14:03:26.934554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.635 [2024-12-05 14:03:26.934586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.635 qpair failed and we were unable to recover it. 00:31:44.635 [2024-12-05 14:03:26.934769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.635 [2024-12-05 14:03:26.934802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.635 qpair failed and we were unable to recover it. 
00:31:44.635 [2024-12-05 14:03:26.935078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.635 [2024-12-05 14:03:26.935111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.635 qpair failed and we were unable to recover it. 00:31:44.635 [2024-12-05 14:03:26.935305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.635 [2024-12-05 14:03:26.935337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.635 qpair failed and we were unable to recover it. 00:31:44.635 [2024-12-05 14:03:26.935533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.635 [2024-12-05 14:03:26.935567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.635 qpair failed and we were unable to recover it. 00:31:44.635 [2024-12-05 14:03:26.935821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.635 [2024-12-05 14:03:26.935856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.635 qpair failed and we were unable to recover it. 00:31:44.635 [2024-12-05 14:03:26.936108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.635 [2024-12-05 14:03:26.936141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.635 qpair failed and we were unable to recover it. 
00:31:44.635 [2024-12-05 14:03:26.936416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.635 [2024-12-05 14:03:26.936450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:44.635 qpair failed and we were unable to recover it.
[duplicate log records elided: the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats continuously for tqpair=0x7fdb68000b90 (addr=10.0.0.2, port=4420) from 14:03:26.936 through 14:03:26.968]
00:31:44.639 [2024-12-05 14:03:26.968389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.639 [2024-12-05 14:03:26.968423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.639 qpair failed and we were unable to recover it. 00:31:44.639 [2024-12-05 14:03:26.968649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.639 [2024-12-05 14:03:26.968681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.639 qpair failed and we were unable to recover it. 00:31:44.639 [2024-12-05 14:03:26.968887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.639 [2024-12-05 14:03:26.968920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.639 qpair failed and we were unable to recover it. 00:31:44.639 [2024-12-05 14:03:26.969141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.639 [2024-12-05 14:03:26.969173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.639 qpair failed and we were unable to recover it. 00:31:44.639 [2024-12-05 14:03:26.969385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.639 [2024-12-05 14:03:26.969420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.639 qpair failed and we were unable to recover it. 
00:31:44.639 [2024-12-05 14:03:26.969698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.639 [2024-12-05 14:03:26.969731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.639 qpair failed and we were unable to recover it. 00:31:44.639 [2024-12-05 14:03:26.970009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.639 [2024-12-05 14:03:26.970040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.639 qpair failed and we were unable to recover it. 00:31:44.639 [2024-12-05 14:03:26.970247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.639 [2024-12-05 14:03:26.970279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.639 qpair failed and we were unable to recover it. 00:31:44.639 [2024-12-05 14:03:26.970439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.639 [2024-12-05 14:03:26.970473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.639 qpair failed and we were unable to recover it. 00:31:44.639 [2024-12-05 14:03:26.970701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.639 [2024-12-05 14:03:26.970734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.639 qpair failed and we were unable to recover it. 
00:31:44.639 [2024-12-05 14:03:26.970987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.639 [2024-12-05 14:03:26.971019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.639 qpair failed and we were unable to recover it. 00:31:44.639 [2024-12-05 14:03:26.971281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.639 [2024-12-05 14:03:26.971312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.639 qpair failed and we were unable to recover it. 00:31:44.639 [2024-12-05 14:03:26.971473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.639 [2024-12-05 14:03:26.971506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.639 qpair failed and we were unable to recover it. 00:31:44.640 [2024-12-05 14:03:26.971707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.640 [2024-12-05 14:03:26.971738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.640 qpair failed and we were unable to recover it. 00:31:44.640 [2024-12-05 14:03:26.971943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.640 [2024-12-05 14:03:26.971975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.640 qpair failed and we were unable to recover it. 
00:31:44.640 [2024-12-05 14:03:26.972185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.640 [2024-12-05 14:03:26.972218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.640 qpair failed and we were unable to recover it. 00:31:44.640 [2024-12-05 14:03:26.972476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.640 [2024-12-05 14:03:26.972510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.640 qpair failed and we were unable to recover it. 00:31:44.640 [2024-12-05 14:03:26.972706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.640 [2024-12-05 14:03:26.972739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.640 qpair failed and we were unable to recover it. 00:31:44.640 [2024-12-05 14:03:26.972918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.640 [2024-12-05 14:03:26.972950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.640 qpair failed and we were unable to recover it. 00:31:44.640 [2024-12-05 14:03:26.973249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.640 [2024-12-05 14:03:26.973281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.640 qpair failed and we were unable to recover it. 
00:31:44.640 [2024-12-05 14:03:26.973533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.640 [2024-12-05 14:03:26.973567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.640 qpair failed and we were unable to recover it. 00:31:44.640 [2024-12-05 14:03:26.973875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.640 [2024-12-05 14:03:26.973906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.640 qpair failed and we were unable to recover it. 00:31:44.640 [2024-12-05 14:03:26.974220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.640 [2024-12-05 14:03:26.974252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.640 qpair failed and we were unable to recover it. 00:31:44.640 [2024-12-05 14:03:26.974520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.640 [2024-12-05 14:03:26.974553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.640 qpair failed and we were unable to recover it. 00:31:44.640 [2024-12-05 14:03:26.974853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.640 [2024-12-05 14:03:26.974884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.640 qpair failed and we were unable to recover it. 
00:31:44.640 [2024-12-05 14:03:26.975208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.640 [2024-12-05 14:03:26.975240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.640 qpair failed and we were unable to recover it. 00:31:44.640 [2024-12-05 14:03:26.975504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.640 [2024-12-05 14:03:26.975537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.640 qpair failed and we were unable to recover it. 00:31:44.640 [2024-12-05 14:03:26.975714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.640 [2024-12-05 14:03:26.975746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.640 qpair failed and we were unable to recover it. 00:31:44.640 [2024-12-05 14:03:26.976019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.640 [2024-12-05 14:03:26.976052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.640 qpair failed and we were unable to recover it. 00:31:44.640 [2024-12-05 14:03:26.976252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.640 [2024-12-05 14:03:26.976292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.640 qpair failed and we were unable to recover it. 
00:31:44.640 [2024-12-05 14:03:26.976474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.640 [2024-12-05 14:03:26.976507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.640 qpair failed and we were unable to recover it. 00:31:44.640 [2024-12-05 14:03:26.976758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.640 [2024-12-05 14:03:26.976790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.640 qpair failed and we were unable to recover it. 00:31:44.640 [2024-12-05 14:03:26.976983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.640 [2024-12-05 14:03:26.977017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.640 qpair failed and we were unable to recover it. 00:31:44.640 [2024-12-05 14:03:26.977299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.640 [2024-12-05 14:03:26.977331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.640 qpair failed and we were unable to recover it. 00:31:44.640 [2024-12-05 14:03:26.977594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.640 [2024-12-05 14:03:26.977627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.640 qpair failed and we were unable to recover it. 
00:31:44.640 [2024-12-05 14:03:26.977807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.640 [2024-12-05 14:03:26.977840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.640 qpair failed and we were unable to recover it. 00:31:44.640 [2024-12-05 14:03:26.978112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.640 [2024-12-05 14:03:26.978144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.640 qpair failed and we were unable to recover it. 00:31:44.640 [2024-12-05 14:03:26.978267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.640 [2024-12-05 14:03:26.978299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.640 qpair failed and we were unable to recover it. 00:31:44.640 [2024-12-05 14:03:26.978550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.640 [2024-12-05 14:03:26.978583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.640 qpair failed and we were unable to recover it. 00:31:44.640 [2024-12-05 14:03:26.978800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.640 [2024-12-05 14:03:26.978833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.640 qpair failed and we were unable to recover it. 
00:31:44.640 [2024-12-05 14:03:26.979052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.640 [2024-12-05 14:03:26.979084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.640 qpair failed and we were unable to recover it. 00:31:44.640 [2024-12-05 14:03:26.979405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.640 [2024-12-05 14:03:26.979439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.640 qpair failed and we were unable to recover it. 00:31:44.641 [2024-12-05 14:03:26.979719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.641 [2024-12-05 14:03:26.979752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.641 qpair failed and we were unable to recover it. 00:31:44.641 [2024-12-05 14:03:26.980064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.641 [2024-12-05 14:03:26.980096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.641 qpair failed and we were unable to recover it. 00:31:44.641 [2024-12-05 14:03:26.980353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.641 [2024-12-05 14:03:26.980395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.641 qpair failed and we were unable to recover it. 
00:31:44.641 [2024-12-05 14:03:26.980697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.641 [2024-12-05 14:03:26.980729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.641 qpair failed and we were unable to recover it. 00:31:44.641 [2024-12-05 14:03:26.980987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.641 [2024-12-05 14:03:26.981019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.641 qpair failed and we were unable to recover it. 00:31:44.641 [2024-12-05 14:03:26.981243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.641 [2024-12-05 14:03:26.981275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.641 qpair failed and we were unable to recover it. 00:31:44.641 [2024-12-05 14:03:26.981556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.641 [2024-12-05 14:03:26.981593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.641 qpair failed and we were unable to recover it. 00:31:44.641 [2024-12-05 14:03:26.981874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.641 [2024-12-05 14:03:26.981907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.641 qpair failed and we were unable to recover it. 
00:31:44.641 [2024-12-05 14:03:26.982151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.641 [2024-12-05 14:03:26.982184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.641 qpair failed and we were unable to recover it. 00:31:44.641 [2024-12-05 14:03:26.982424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.641 [2024-12-05 14:03:26.982460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.641 qpair failed and we were unable to recover it. 00:31:44.641 [2024-12-05 14:03:26.982662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.641 [2024-12-05 14:03:26.982695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.641 qpair failed and we were unable to recover it. 00:31:44.641 [2024-12-05 14:03:26.982972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.641 [2024-12-05 14:03:26.983005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.641 qpair failed and we were unable to recover it. 00:31:44.641 [2024-12-05 14:03:26.983203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.641 [2024-12-05 14:03:26.983235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.641 qpair failed and we were unable to recover it. 
00:31:44.641 [2024-12-05 14:03:26.983510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.641 [2024-12-05 14:03:26.983543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.641 qpair failed and we were unable to recover it. 00:31:44.641 [2024-12-05 14:03:26.983751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.641 [2024-12-05 14:03:26.983785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.641 qpair failed and we were unable to recover it. 00:31:44.641 [2024-12-05 14:03:26.984024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.641 [2024-12-05 14:03:26.984055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.641 qpair failed and we were unable to recover it. 00:31:44.641 [2024-12-05 14:03:26.984307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.641 [2024-12-05 14:03:26.984340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.641 qpair failed and we were unable to recover it. 00:31:44.641 [2024-12-05 14:03:26.984645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.641 [2024-12-05 14:03:26.984678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.641 qpair failed and we were unable to recover it. 
00:31:44.641 [2024-12-05 14:03:26.984941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.641 [2024-12-05 14:03:26.984973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.641 qpair failed and we were unable to recover it. 00:31:44.641 [2024-12-05 14:03:26.985105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.641 [2024-12-05 14:03:26.985137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.641 qpair failed and we were unable to recover it. 00:31:44.641 [2024-12-05 14:03:26.985393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.641 [2024-12-05 14:03:26.985427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.641 qpair failed and we were unable to recover it. 00:31:44.641 [2024-12-05 14:03:26.985727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.641 [2024-12-05 14:03:26.985759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.641 qpair failed and we were unable to recover it. 00:31:44.641 [2024-12-05 14:03:26.986045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.641 [2024-12-05 14:03:26.986078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.641 qpair failed and we were unable to recover it. 
00:31:44.641 [2024-12-05 14:03:26.986352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.641 [2024-12-05 14:03:26.986409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.641 qpair failed and we were unable to recover it. 00:31:44.641 [2024-12-05 14:03:26.986682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.642 [2024-12-05 14:03:26.986715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.642 qpair failed and we were unable to recover it. 00:31:44.642 [2024-12-05 14:03:26.986929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.642 [2024-12-05 14:03:26.986961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.642 qpair failed and we were unable to recover it. 00:31:44.642 [2024-12-05 14:03:26.987179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.642 [2024-12-05 14:03:26.987211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.642 qpair failed and we were unable to recover it. 00:31:44.642 [2024-12-05 14:03:26.987482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.642 [2024-12-05 14:03:26.987522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.642 qpair failed and we were unable to recover it. 
00:31:44.642 [2024-12-05 14:03:26.987724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.642 [2024-12-05 14:03:26.987758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.642 qpair failed and we were unable to recover it. 00:31:44.642 [2024-12-05 14:03:26.987957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.642 [2024-12-05 14:03:26.987989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.642 qpair failed and we were unable to recover it. 00:31:44.642 [2024-12-05 14:03:26.988187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.642 [2024-12-05 14:03:26.988218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.642 qpair failed and we were unable to recover it. 00:31:44.642 [2024-12-05 14:03:26.988492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.642 [2024-12-05 14:03:26.988526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.642 qpair failed and we were unable to recover it. 00:31:44.642 [2024-12-05 14:03:26.988736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.642 [2024-12-05 14:03:26.988768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.642 qpair failed and we were unable to recover it. 
00:31:44.642 [2024-12-05 14:03:26.988962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:31:44.642 [2024-12-05 14:03:26.988994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 
00:31:44.642 qpair failed and we were unable to recover it. 
00:31:44.646 [2024-12-05 14:03:27.020383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.646 [2024-12-05 14:03:27.020417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.646 qpair failed and we were unable to recover it. 00:31:44.646 [2024-12-05 14:03:27.020610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.646 [2024-12-05 14:03:27.020642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.646 qpair failed and we were unable to recover it. 00:31:44.646 [2024-12-05 14:03:27.020834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.646 [2024-12-05 14:03:27.020866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.646 qpair failed and we were unable to recover it. 00:31:44.646 [2024-12-05 14:03:27.021148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.646 [2024-12-05 14:03:27.021181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.646 qpair failed and we were unable to recover it. 00:31:44.646 [2024-12-05 14:03:27.021302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.646 [2024-12-05 14:03:27.021334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.646 qpair failed and we were unable to recover it. 
00:31:44.646 [2024-12-05 14:03:27.021554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.646 [2024-12-05 14:03:27.021587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.646 qpair failed and we were unable to recover it. 00:31:44.646 [2024-12-05 14:03:27.021769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.646 [2024-12-05 14:03:27.021801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.646 qpair failed and we were unable to recover it. 00:31:44.646 [2024-12-05 14:03:27.022077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.646 [2024-12-05 14:03:27.022109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.646 qpair failed and we were unable to recover it. 00:31:44.646 [2024-12-05 14:03:27.022406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.646 [2024-12-05 14:03:27.022440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.646 qpair failed and we were unable to recover it. 00:31:44.646 [2024-12-05 14:03:27.022559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.646 [2024-12-05 14:03:27.022591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.646 qpair failed and we were unable to recover it. 
00:31:44.646 [2024-12-05 14:03:27.022841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.646 [2024-12-05 14:03:27.022873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.646 qpair failed and we were unable to recover it. 00:31:44.646 [2024-12-05 14:03:27.023075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.646 [2024-12-05 14:03:27.023108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.646 qpair failed and we were unable to recover it. 00:31:44.646 [2024-12-05 14:03:27.023299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.646 [2024-12-05 14:03:27.023332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.646 qpair failed and we were unable to recover it. 00:31:44.646 [2024-12-05 14:03:27.023604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.646 [2024-12-05 14:03:27.023638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.646 qpair failed and we were unable to recover it. 00:31:44.646 [2024-12-05 14:03:27.023820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.646 [2024-12-05 14:03:27.023851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.646 qpair failed and we were unable to recover it. 
00:31:44.646 [2024-12-05 14:03:27.024110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.646 [2024-12-05 14:03:27.024142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.646 qpair failed and we were unable to recover it. 00:31:44.646 [2024-12-05 14:03:27.024427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.646 [2024-12-05 14:03:27.024461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.646 qpair failed and we were unable to recover it. 00:31:44.646 [2024-12-05 14:03:27.024721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.646 [2024-12-05 14:03:27.024753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.646 qpair failed and we were unable to recover it. 00:31:44.646 [2024-12-05 14:03:27.025019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.646 [2024-12-05 14:03:27.025051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.646 qpair failed and we were unable to recover it. 00:31:44.646 [2024-12-05 14:03:27.025263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.646 [2024-12-05 14:03:27.025296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.646 qpair failed and we were unable to recover it. 
00:31:44.646 [2024-12-05 14:03:27.025427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.646 [2024-12-05 14:03:27.025458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.646 qpair failed and we were unable to recover it. 00:31:44.646 [2024-12-05 14:03:27.025734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.646 [2024-12-05 14:03:27.025766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.646 qpair failed and we were unable to recover it. 00:31:44.646 [2024-12-05 14:03:27.026050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.647 [2024-12-05 14:03:27.026082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.647 qpair failed and we were unable to recover it. 00:31:44.647 [2024-12-05 14:03:27.026280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.647 [2024-12-05 14:03:27.026313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.647 qpair failed and we were unable to recover it. 00:31:44.647 [2024-12-05 14:03:27.026579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.647 [2024-12-05 14:03:27.026611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.647 qpair failed and we were unable to recover it. 
00:31:44.647 [2024-12-05 14:03:27.026884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.647 [2024-12-05 14:03:27.026916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.647 qpair failed and we were unable to recover it. 00:31:44.647 [2024-12-05 14:03:27.027181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.647 [2024-12-05 14:03:27.027220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.647 qpair failed and we were unable to recover it. 00:31:44.647 [2024-12-05 14:03:27.027507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.647 [2024-12-05 14:03:27.027541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.647 qpair failed and we were unable to recover it. 00:31:44.647 [2024-12-05 14:03:27.027745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.647 [2024-12-05 14:03:27.027777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.647 qpair failed and we were unable to recover it. 00:31:44.647 [2024-12-05 14:03:27.028033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.647 [2024-12-05 14:03:27.028065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.647 qpair failed and we were unable to recover it. 
00:31:44.647 [2024-12-05 14:03:27.028320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.647 [2024-12-05 14:03:27.028351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.647 qpair failed and we were unable to recover it. 00:31:44.647 [2024-12-05 14:03:27.028574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.647 [2024-12-05 14:03:27.028607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.647 qpair failed and we were unable to recover it. 00:31:44.647 [2024-12-05 14:03:27.028862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.647 [2024-12-05 14:03:27.028895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.647 qpair failed and we were unable to recover it. 00:31:44.647 [2024-12-05 14:03:27.029200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.647 [2024-12-05 14:03:27.029232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.647 qpair failed and we were unable to recover it. 00:31:44.647 [2024-12-05 14:03:27.029425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.647 [2024-12-05 14:03:27.029458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.647 qpair failed and we were unable to recover it. 
00:31:44.647 [2024-12-05 14:03:27.029752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.647 [2024-12-05 14:03:27.029785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.647 qpair failed and we were unable to recover it. 00:31:44.647 [2024-12-05 14:03:27.030058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.647 [2024-12-05 14:03:27.030091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.647 qpair failed and we were unable to recover it. 00:31:44.647 [2024-12-05 14:03:27.030304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.647 [2024-12-05 14:03:27.030335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.647 qpair failed and we were unable to recover it. 00:31:44.647 [2024-12-05 14:03:27.030629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.647 [2024-12-05 14:03:27.030663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.647 qpair failed and we were unable to recover it. 00:31:44.647 [2024-12-05 14:03:27.030822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.647 [2024-12-05 14:03:27.030855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.647 qpair failed and we were unable to recover it. 
00:31:44.647 [2024-12-05 14:03:27.031144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.647 [2024-12-05 14:03:27.031177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.647 qpair failed and we were unable to recover it. 00:31:44.647 [2024-12-05 14:03:27.031384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.647 [2024-12-05 14:03:27.031416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.647 qpair failed and we were unable to recover it. 00:31:44.647 [2024-12-05 14:03:27.031688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.647 [2024-12-05 14:03:27.031720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.647 qpair failed and we were unable to recover it. 00:31:44.647 [2024-12-05 14:03:27.031906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.647 [2024-12-05 14:03:27.031939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.647 qpair failed and we were unable to recover it. 00:31:44.647 [2024-12-05 14:03:27.032206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.647 [2024-12-05 14:03:27.032237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.647 qpair failed and we were unable to recover it. 
00:31:44.647 [2024-12-05 14:03:27.032516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.648 [2024-12-05 14:03:27.032549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.648 qpair failed and we were unable to recover it. 00:31:44.648 [2024-12-05 14:03:27.032702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.648 [2024-12-05 14:03:27.032734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.648 qpair failed and we were unable to recover it. 00:31:44.648 [2024-12-05 14:03:27.033018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.648 [2024-12-05 14:03:27.033052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.648 qpair failed and we were unable to recover it. 00:31:44.648 [2024-12-05 14:03:27.033240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.648 [2024-12-05 14:03:27.033272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.648 qpair failed and we were unable to recover it. 00:31:44.648 [2024-12-05 14:03:27.033503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.648 [2024-12-05 14:03:27.033536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.648 qpair failed and we were unable to recover it. 
00:31:44.648 [2024-12-05 14:03:27.033836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.648 [2024-12-05 14:03:27.033869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.648 qpair failed and we were unable to recover it. 00:31:44.648 [2024-12-05 14:03:27.034070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.648 [2024-12-05 14:03:27.034102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.648 qpair failed and we were unable to recover it. 00:31:44.648 [2024-12-05 14:03:27.034284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.648 [2024-12-05 14:03:27.034316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.648 qpair failed and we were unable to recover it. 00:31:44.648 [2024-12-05 14:03:27.034592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.648 [2024-12-05 14:03:27.034625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.648 qpair failed and we were unable to recover it. 00:31:44.648 [2024-12-05 14:03:27.034814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.648 [2024-12-05 14:03:27.034845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.648 qpair failed and we were unable to recover it. 
00:31:44.648 [2024-12-05 14:03:27.034991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.648 [2024-12-05 14:03:27.035024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.648 qpair failed and we were unable to recover it. 00:31:44.648 [2024-12-05 14:03:27.035299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.648 [2024-12-05 14:03:27.035330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.648 qpair failed and we were unable to recover it. 00:31:44.648 [2024-12-05 14:03:27.035643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.648 [2024-12-05 14:03:27.035676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.648 qpair failed and we were unable to recover it. 00:31:44.648 [2024-12-05 14:03:27.035853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.648 [2024-12-05 14:03:27.035884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.648 qpair failed and we were unable to recover it. 00:31:44.648 [2024-12-05 14:03:27.036070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.648 [2024-12-05 14:03:27.036102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.648 qpair failed and we were unable to recover it. 
00:31:44.648 [2024-12-05 14:03:27.036283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.648 [2024-12-05 14:03:27.036316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.648 qpair failed and we were unable to recover it. 00:31:44.648 [2024-12-05 14:03:27.036619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.648 [2024-12-05 14:03:27.036651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.648 qpair failed and we were unable to recover it. 00:31:44.648 [2024-12-05 14:03:27.036931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.648 [2024-12-05 14:03:27.036964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.648 qpair failed and we were unable to recover it. 00:31:44.648 [2024-12-05 14:03:27.037158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.648 [2024-12-05 14:03:27.037191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.648 qpair failed and we were unable to recover it. 00:31:44.648 [2024-12-05 14:03:27.037337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.648 [2024-12-05 14:03:27.037379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.648 qpair failed and we were unable to recover it. 
00:31:44.648 [2024-12-05 14:03:27.037677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.648 [2024-12-05 14:03:27.037710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.648 qpair failed and we were unable to recover it. 00:31:44.648 [2024-12-05 14:03:27.037854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.648 [2024-12-05 14:03:27.037893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.648 qpair failed and we were unable to recover it. 00:31:44.648 [2024-12-05 14:03:27.038196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.648 [2024-12-05 14:03:27.038228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.648 qpair failed and we were unable to recover it. 00:31:44.648 [2024-12-05 14:03:27.038517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.648 [2024-12-05 14:03:27.038551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.648 qpair failed and we were unable to recover it. 00:31:44.648 [2024-12-05 14:03:27.038848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.648 [2024-12-05 14:03:27.038881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.648 qpair failed and we were unable to recover it. 
00:31:44.648 [2024-12-05 14:03:27.039193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.648 [2024-12-05 14:03:27.039225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.648 qpair failed and we were unable to recover it. 00:31:44.648 [2024-12-05 14:03:27.039511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.648 [2024-12-05 14:03:27.039545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.648 qpair failed and we were unable to recover it. 00:31:44.648 [2024-12-05 14:03:27.039797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.648 [2024-12-05 14:03:27.039830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.648 qpair failed and we were unable to recover it. 00:31:44.648 [2024-12-05 14:03:27.040018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.648 [2024-12-05 14:03:27.040051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.648 qpair failed and we were unable to recover it. 00:31:44.649 [2024-12-05 14:03:27.040248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.649 [2024-12-05 14:03:27.040280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.649 qpair failed and we were unable to recover it. 
00:31:44.649 [2024-12-05 14:03:27.040565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.649 [2024-12-05 14:03:27.040598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.649 qpair failed and we were unable to recover it. 00:31:44.649 [2024-12-05 14:03:27.040779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.649 [2024-12-05 14:03:27.040810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.649 qpair failed and we were unable to recover it. 00:31:44.649 [2024-12-05 14:03:27.041009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.649 [2024-12-05 14:03:27.041041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.649 qpair failed and we were unable to recover it. 00:31:44.649 [2024-12-05 14:03:27.041293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.649 [2024-12-05 14:03:27.041325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.649 qpair failed and we were unable to recover it. 00:31:44.649 [2024-12-05 14:03:27.041535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.649 [2024-12-05 14:03:27.041569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.649 qpair failed and we were unable to recover it. 
00:31:44.649 [2024-12-05 14:03:27.041713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.649 [2024-12-05 14:03:27.041746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.649 qpair failed and we were unable to recover it. 00:31:44.649 [2024-12-05 14:03:27.041855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.649 [2024-12-05 14:03:27.041887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.649 qpair failed and we were unable to recover it. 00:31:44.649 [2024-12-05 14:03:27.042090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.649 [2024-12-05 14:03:27.042121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.649 qpair failed and we were unable to recover it. 00:31:44.649 [2024-12-05 14:03:27.042328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.649 [2024-12-05 14:03:27.042361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.649 qpair failed and we were unable to recover it. 00:31:44.649 [2024-12-05 14:03:27.042510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.649 [2024-12-05 14:03:27.042542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.649 qpair failed and we were unable to recover it. 
00:31:44.649 [2024-12-05 14:03:27.042789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.649 [2024-12-05 14:03:27.042820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.649 qpair failed and we were unable to recover it. 00:31:44.649 [2024-12-05 14:03:27.043021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.649 [2024-12-05 14:03:27.043053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.649 qpair failed and we were unable to recover it. 00:31:44.649 [2024-12-05 14:03:27.043342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.649 [2024-12-05 14:03:27.043387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.649 qpair failed and we were unable to recover it. 00:31:44.649 [2024-12-05 14:03:27.043528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.649 [2024-12-05 14:03:27.043559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.649 qpair failed and we were unable to recover it. 00:31:44.649 [2024-12-05 14:03:27.043688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.649 [2024-12-05 14:03:27.043720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.649 qpair failed and we were unable to recover it. 
00:31:44.649 [2024-12-05 14:03:27.043907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.649 [2024-12-05 14:03:27.043939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.649 qpair failed and we were unable to recover it. 00:31:44.649 [2024-12-05 14:03:27.044089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.649 [2024-12-05 14:03:27.044121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.649 qpair failed and we were unable to recover it. 00:31:44.649 [2024-12-05 14:03:27.044381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.649 [2024-12-05 14:03:27.044415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.649 qpair failed and we were unable to recover it. 00:31:44.649 [2024-12-05 14:03:27.044621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.649 [2024-12-05 14:03:27.044653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.649 qpair failed and we were unable to recover it. 00:31:44.649 [2024-12-05 14:03:27.044901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.649 [2024-12-05 14:03:27.044933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.649 qpair failed and we were unable to recover it. 
00:31:44.649 [2024-12-05 14:03:27.045135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.649 [2024-12-05 14:03:27.045166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.649 qpair failed and we were unable to recover it. 00:31:44.649 [2024-12-05 14:03:27.045365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.649 [2024-12-05 14:03:27.045409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.649 qpair failed and we were unable to recover it. 00:31:44.649 [2024-12-05 14:03:27.045655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.649 [2024-12-05 14:03:27.045687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.649 qpair failed and we were unable to recover it. 00:31:44.649 [2024-12-05 14:03:27.045881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.649 [2024-12-05 14:03:27.045914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.649 qpair failed and we were unable to recover it. 00:31:44.649 [2024-12-05 14:03:27.046047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.649 [2024-12-05 14:03:27.046079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.649 qpair failed and we were unable to recover it. 
00:31:44.649 [2024-12-05 14:03:27.046403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.649 [2024-12-05 14:03:27.046436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.649 qpair failed and we were unable to recover it. 00:31:44.649 [2024-12-05 14:03:27.046639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.649 [2024-12-05 14:03:27.046672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.649 qpair failed and we were unable to recover it. 00:31:44.649 [2024-12-05 14:03:27.046938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.649 [2024-12-05 14:03:27.046972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.649 qpair failed and we were unable to recover it. 00:31:44.650 [2024-12-05 14:03:27.047227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.650 [2024-12-05 14:03:27.047259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.650 qpair failed and we were unable to recover it. 00:31:44.650 [2024-12-05 14:03:27.047457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.650 [2024-12-05 14:03:27.047491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.650 qpair failed and we were unable to recover it. 
00:31:44.650 [2024-12-05 14:03:27.047699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.650 [2024-12-05 14:03:27.047731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.650 qpair failed and we were unable to recover it. 00:31:44.650 [2024-12-05 14:03:27.047920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.650 [2024-12-05 14:03:27.047960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.650 qpair failed and we were unable to recover it. 00:31:44.650 [2024-12-05 14:03:27.048183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.650 [2024-12-05 14:03:27.048217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.650 qpair failed and we were unable to recover it. 00:31:44.650 [2024-12-05 14:03:27.048428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.650 [2024-12-05 14:03:27.048462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.650 qpair failed and we were unable to recover it. 00:31:44.650 [2024-12-05 14:03:27.048717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.650 [2024-12-05 14:03:27.048752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.650 qpair failed and we were unable to recover it. 
00:31:44.650 [2024-12-05 14:03:27.048942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.650 [2024-12-05 14:03:27.048978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.650 qpair failed and we were unable to recover it. 00:31:44.650 [2024-12-05 14:03:27.049172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.650 [2024-12-05 14:03:27.049204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.650 qpair failed and we were unable to recover it. 00:31:44.650 [2024-12-05 14:03:27.049336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.650 [2024-12-05 14:03:27.049382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.650 qpair failed and we were unable to recover it. 00:31:44.650 [2024-12-05 14:03:27.049582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.650 [2024-12-05 14:03:27.049615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.650 qpair failed and we were unable to recover it. 00:31:44.650 [2024-12-05 14:03:27.049892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.650 [2024-12-05 14:03:27.049929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.650 qpair failed and we were unable to recover it. 
00:31:44.650 [2024-12-05 14:03:27.050119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.650 [2024-12-05 14:03:27.050151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.650 qpair failed and we were unable to recover it. 00:31:44.650 [2024-12-05 14:03:27.050364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.650 [2024-12-05 14:03:27.050408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.650 qpair failed and we were unable to recover it. 00:31:44.650 [2024-12-05 14:03:27.050613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.650 [2024-12-05 14:03:27.050647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.650 qpair failed and we were unable to recover it. 00:31:44.650 [2024-12-05 14:03:27.050846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.650 [2024-12-05 14:03:27.050879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.650 qpair failed and we were unable to recover it. 00:31:44.650 [2024-12-05 14:03:27.051074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.650 [2024-12-05 14:03:27.051106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.650 qpair failed and we were unable to recover it. 
00:31:44.650 [2024-12-05 14:03:27.051237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.650 [2024-12-05 14:03:27.051270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.650 qpair failed and we were unable to recover it. 00:31:44.650 [2024-12-05 14:03:27.051452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.650 [2024-12-05 14:03:27.051486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.650 qpair failed and we were unable to recover it. 00:31:44.650 [2024-12-05 14:03:27.051773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.650 [2024-12-05 14:03:27.051806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.650 qpair failed and we were unable to recover it. 00:31:44.650 [2024-12-05 14:03:27.052093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.650 [2024-12-05 14:03:27.052126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.650 qpair failed and we were unable to recover it. 00:31:44.650 [2024-12-05 14:03:27.052353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.650 [2024-12-05 14:03:27.052402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.650 qpair failed and we were unable to recover it. 
00:31:44.650 [2024-12-05 14:03:27.052537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.650 [2024-12-05 14:03:27.052570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.650 qpair failed and we were unable to recover it. 00:31:44.650 [2024-12-05 14:03:27.052781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.650 [2024-12-05 14:03:27.052813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.650 qpair failed and we were unable to recover it. 00:31:44.650 [2024-12-05 14:03:27.053006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.650 [2024-12-05 14:03:27.053039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.650 qpair failed and we were unable to recover it. 00:31:44.650 [2024-12-05 14:03:27.053162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.650 [2024-12-05 14:03:27.053195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.650 qpair failed and we were unable to recover it. 00:31:44.650 [2024-12-05 14:03:27.053337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.650 [2024-12-05 14:03:27.053380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.650 qpair failed and we were unable to recover it. 
00:31:44.650 [2024-12-05 14:03:27.053597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.650 [2024-12-05 14:03:27.053630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.650 qpair failed and we were unable to recover it. 00:31:44.650 [2024-12-05 14:03:27.053855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.650 [2024-12-05 14:03:27.053889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.650 qpair failed and we were unable to recover it. 00:31:44.650 [2024-12-05 14:03:27.054010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.650 [2024-12-05 14:03:27.054041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 00:31:44.651 [2024-12-05 14:03:27.054256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.651 [2024-12-05 14:03:27.054288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 00:31:44.651 [2024-12-05 14:03:27.054550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.651 [2024-12-05 14:03:27.054585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 
00:31:44.651 [2024-12-05 14:03:27.054711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.651 [2024-12-05 14:03:27.054744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 00:31:44.651 [2024-12-05 14:03:27.054896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.651 [2024-12-05 14:03:27.054928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 00:31:44.651 [2024-12-05 14:03:27.055072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.651 [2024-12-05 14:03:27.055107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 00:31:44.651 [2024-12-05 14:03:27.055308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.651 [2024-12-05 14:03:27.055342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 00:31:44.651 [2024-12-05 14:03:27.055486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.651 [2024-12-05 14:03:27.055519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 
00:31:44.651 [2024-12-05 14:03:27.055716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.651 [2024-12-05 14:03:27.055748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 00:31:44.651 [2024-12-05 14:03:27.055862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.651 [2024-12-05 14:03:27.055894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 00:31:44.651 [2024-12-05 14:03:27.056106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.651 [2024-12-05 14:03:27.056139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 00:31:44.651 [2024-12-05 14:03:27.056345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.651 [2024-12-05 14:03:27.056391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 00:31:44.651 [2024-12-05 14:03:27.056511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.651 [2024-12-05 14:03:27.056544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 
00:31:44.651 [2024-12-05 14:03:27.056743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.651 [2024-12-05 14:03:27.056775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 00:31:44.651 [2024-12-05 14:03:27.057034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.651 [2024-12-05 14:03:27.057077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 00:31:44.651 [2024-12-05 14:03:27.057289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.651 [2024-12-05 14:03:27.057321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 00:31:44.651 [2024-12-05 14:03:27.057468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.651 [2024-12-05 14:03:27.057502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 00:31:44.651 [2024-12-05 14:03:27.057702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.651 [2024-12-05 14:03:27.057734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 
00:31:44.651 [2024-12-05 14:03:27.057925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.651 [2024-12-05 14:03:27.057957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 00:31:44.651 [2024-12-05 14:03:27.058159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.651 [2024-12-05 14:03:27.058191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 00:31:44.651 [2024-12-05 14:03:27.058446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.651 [2024-12-05 14:03:27.058480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 00:31:44.651 [2024-12-05 14:03:27.058687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.651 [2024-12-05 14:03:27.058721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 00:31:44.651 [2024-12-05 14:03:27.058873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.651 [2024-12-05 14:03:27.058908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 
00:31:44.651 [2024-12-05 14:03:27.059173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.651 [2024-12-05 14:03:27.059207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 00:31:44.651 [2024-12-05 14:03:27.059355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.651 [2024-12-05 14:03:27.059399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 00:31:44.651 [2024-12-05 14:03:27.059605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.651 [2024-12-05 14:03:27.059637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 00:31:44.651 [2024-12-05 14:03:27.059829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.651 [2024-12-05 14:03:27.059863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 00:31:44.651 [2024-12-05 14:03:27.060065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.651 [2024-12-05 14:03:27.060097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 
00:31:44.651 [2024-12-05 14:03:27.060288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.651 [2024-12-05 14:03:27.060323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 00:31:44.651 [2024-12-05 14:03:27.060454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.651 [2024-12-05 14:03:27.060486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 00:31:44.651 [2024-12-05 14:03:27.060669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.651 [2024-12-05 14:03:27.060703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.651 qpair failed and we were unable to recover it. 00:31:44.652 [2024-12-05 14:03:27.060851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.652 [2024-12-05 14:03:27.060884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.652 qpair failed and we were unable to recover it. 00:31:44.652 [2024-12-05 14:03:27.061017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.652 [2024-12-05 14:03:27.061051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.652 qpair failed and we were unable to recover it. 
00:31:44.652 [2024-12-05 14:03:27.061241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.652 [2024-12-05 14:03:27.061273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.652 qpair failed and we were unable to recover it. 00:31:44.652 [2024-12-05 14:03:27.061459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.652 [2024-12-05 14:03:27.061493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.652 qpair failed and we were unable to recover it. 00:31:44.652 [2024-12-05 14:03:27.061677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.652 [2024-12-05 14:03:27.061710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.652 qpair failed and we were unable to recover it. 00:31:44.652 [2024-12-05 14:03:27.061860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.652 [2024-12-05 14:03:27.061893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.652 qpair failed and we were unable to recover it. 00:31:44.652 [2024-12-05 14:03:27.062089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.652 [2024-12-05 14:03:27.062123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.652 qpair failed and we were unable to recover it. 
00:31:44.652 [2024-12-05 14:03:27.062316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.652 [2024-12-05 14:03:27.062350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.652 qpair failed and we were unable to recover it. 00:31:44.652 [2024-12-05 14:03:27.062663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.652 [2024-12-05 14:03:27.062697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.652 qpair failed and we were unable to recover it. 00:31:44.652 [2024-12-05 14:03:27.062816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.652 [2024-12-05 14:03:27.062850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.652 qpair failed and we were unable to recover it. 00:31:44.652 [2024-12-05 14:03:27.063088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.652 [2024-12-05 14:03:27.063123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.652 qpair failed and we were unable to recover it. 00:31:44.652 [2024-12-05 14:03:27.063386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.652 [2024-12-05 14:03:27.063421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.652 qpair failed and we were unable to recover it. 
00:31:44.656 [2024-12-05 14:03:27.087561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.656 [2024-12-05 14:03:27.087594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.656 qpair failed and we were unable to recover it. 00:31:44.656 [2024-12-05 14:03:27.087839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.656 [2024-12-05 14:03:27.087871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.656 qpair failed and we were unable to recover it. 00:31:44.656 [2024-12-05 14:03:27.088079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.656 [2024-12-05 14:03:27.088112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.656 qpair failed and we were unable to recover it. 00:31:44.656 [2024-12-05 14:03:27.088358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.656 [2024-12-05 14:03:27.088441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:44.656 qpair failed and we were unable to recover it. 00:31:44.656 [2024-12-05 14:03:27.088652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.656 [2024-12-05 14:03:27.088708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.656 qpair failed and we were unable to recover it. 
00:31:44.656 [2024-12-05 14:03:27.088898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.656 [2024-12-05 14:03:27.088922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.656 qpair failed and we were unable to recover it. 00:31:44.656 [2024-12-05 14:03:27.089187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.656 [2024-12-05 14:03:27.089211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.656 qpair failed and we were unable to recover it. 00:31:44.656 [2024-12-05 14:03:27.089383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.656 [2024-12-05 14:03:27.089408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.656 qpair failed and we were unable to recover it. 00:31:44.656 [2024-12-05 14:03:27.089675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.656 [2024-12-05 14:03:27.089699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.656 qpair failed and we were unable to recover it. 00:31:44.656 [2024-12-05 14:03:27.089812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.656 [2024-12-05 14:03:27.089834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.656 qpair failed and we were unable to recover it. 
00:31:44.656 [2024-12-05 14:03:27.090008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.656 [2024-12-05 14:03:27.090030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.656 qpair failed and we were unable to recover it. 00:31:44.656 [2024-12-05 14:03:27.090259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.656 [2024-12-05 14:03:27.090282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.656 qpair failed and we were unable to recover it. 00:31:44.656 [2024-12-05 14:03:27.090390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.656 [2024-12-05 14:03:27.090419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.656 qpair failed and we were unable to recover it. 00:31:44.656 [2024-12-05 14:03:27.090519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.656 [2024-12-05 14:03:27.090548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.656 qpair failed and we were unable to recover it. 00:31:44.656 [2024-12-05 14:03:27.090719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.656 [2024-12-05 14:03:27.090741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.656 qpair failed and we were unable to recover it. 
00:31:44.656 [2024-12-05 14:03:27.090836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.656 [2024-12-05 14:03:27.090856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.656 qpair failed and we were unable to recover it. 00:31:44.656 [2024-12-05 14:03:27.091022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.656 [2024-12-05 14:03:27.091044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.656 qpair failed and we were unable to recover it. 00:31:44.656 [2024-12-05 14:03:27.091225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.656 [2024-12-05 14:03:27.091249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.656 qpair failed and we were unable to recover it. 00:31:44.656 [2024-12-05 14:03:27.091431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.656 [2024-12-05 14:03:27.091455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.656 qpair failed and we were unable to recover it. 00:31:44.656 [2024-12-05 14:03:27.091629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.656 [2024-12-05 14:03:27.091651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.656 qpair failed and we were unable to recover it. 
00:31:44.656 [2024-12-05 14:03:27.091756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.656 [2024-12-05 14:03:27.091782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.656 qpair failed and we were unable to recover it. 00:31:44.656 [2024-12-05 14:03:27.091974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.656 [2024-12-05 14:03:27.091997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.656 qpair failed and we were unable to recover it. 00:31:44.656 [2024-12-05 14:03:27.092244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.657 [2024-12-05 14:03:27.092266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.657 qpair failed and we were unable to recover it. 00:31:44.657 [2024-12-05 14:03:27.092440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.657 [2024-12-05 14:03:27.092463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.657 qpair failed and we were unable to recover it. 00:31:44.657 [2024-12-05 14:03:27.092711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.657 [2024-12-05 14:03:27.092733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.657 qpair failed and we were unable to recover it. 
00:31:44.657 [2024-12-05 14:03:27.092903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.657 [2024-12-05 14:03:27.092925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.657 qpair failed and we were unable to recover it. 00:31:44.657 [2024-12-05 14:03:27.093199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.657 [2024-12-05 14:03:27.093224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.657 qpair failed and we were unable to recover it. 00:31:44.657 [2024-12-05 14:03:27.093356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.657 [2024-12-05 14:03:27.093385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.657 qpair failed and we were unable to recover it. 00:31:44.657 [2024-12-05 14:03:27.093559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.657 [2024-12-05 14:03:27.093581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.657 qpair failed and we were unable to recover it. 00:31:44.657 [2024-12-05 14:03:27.093721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.657 [2024-12-05 14:03:27.093754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.657 qpair failed and we were unable to recover it. 
00:31:44.657 [2024-12-05 14:03:27.093864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.657 [2024-12-05 14:03:27.093897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.657 qpair failed and we were unable to recover it. 00:31:44.657 [2024-12-05 14:03:27.094094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.657 [2024-12-05 14:03:27.094129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.657 qpair failed and we were unable to recover it. 00:31:44.657 [2024-12-05 14:03:27.094268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.657 [2024-12-05 14:03:27.094291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.657 qpair failed and we were unable to recover it. 00:31:44.657 [2024-12-05 14:03:27.094393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.657 [2024-12-05 14:03:27.094415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.657 qpair failed and we were unable to recover it. 00:31:44.657 [2024-12-05 14:03:27.094512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.657 [2024-12-05 14:03:27.094533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.657 qpair failed and we were unable to recover it. 
00:31:44.657 [2024-12-05 14:03:27.094692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.657 [2024-12-05 14:03:27.094714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.657 qpair failed and we were unable to recover it. 00:31:44.657 [2024-12-05 14:03:27.094969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.657 [2024-12-05 14:03:27.094992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.657 qpair failed and we were unable to recover it. 00:31:44.657 [2024-12-05 14:03:27.095076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.657 [2024-12-05 14:03:27.095096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.657 qpair failed and we were unable to recover it. 00:31:44.657 [2024-12-05 14:03:27.095254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.657 [2024-12-05 14:03:27.095275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.657 qpair failed and we were unable to recover it. 00:31:44.657 [2024-12-05 14:03:27.095382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.657 [2024-12-05 14:03:27.095404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.657 qpair failed and we were unable to recover it. 
00:31:44.657 [2024-12-05 14:03:27.095575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.657 [2024-12-05 14:03:27.095622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.657 qpair failed and we were unable to recover it. 00:31:44.657 [2024-12-05 14:03:27.095746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.657 [2024-12-05 14:03:27.095778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.657 qpair failed and we were unable to recover it. 00:31:44.657 [2024-12-05 14:03:27.095892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.657 [2024-12-05 14:03:27.095923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.657 qpair failed and we were unable to recover it. 00:31:44.657 [2024-12-05 14:03:27.096176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.657 [2024-12-05 14:03:27.096207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.657 qpair failed and we were unable to recover it. 00:31:44.657 [2024-12-05 14:03:27.096314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.657 [2024-12-05 14:03:27.096336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.657 qpair failed and we were unable to recover it. 
00:31:44.657 [2024-12-05 14:03:27.096519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.657 [2024-12-05 14:03:27.096544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.657 qpair failed and we were unable to recover it. 00:31:44.657 [2024-12-05 14:03:27.096795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.657 [2024-12-05 14:03:27.096817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.657 qpair failed and we were unable to recover it. 00:31:44.657 [2024-12-05 14:03:27.096990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.657 [2024-12-05 14:03:27.097013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.657 qpair failed and we were unable to recover it. 00:31:44.657 [2024-12-05 14:03:27.097197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.657 [2024-12-05 14:03:27.097228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.657 qpair failed and we were unable to recover it. 00:31:44.657 [2024-12-05 14:03:27.097435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.657 [2024-12-05 14:03:27.097472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.657 qpair failed and we were unable to recover it. 
00:31:44.657 [2024-12-05 14:03:27.097680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.657 [2024-12-05 14:03:27.097713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.657 qpair failed and we were unable to recover it. 00:31:44.657 [2024-12-05 14:03:27.097837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.657 [2024-12-05 14:03:27.097869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.657 qpair failed and we were unable to recover it. 00:31:44.658 [2024-12-05 14:03:27.098064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.658 [2024-12-05 14:03:27.098098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.658 qpair failed and we were unable to recover it. 00:31:44.658 [2024-12-05 14:03:27.098381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.658 [2024-12-05 14:03:27.098404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.658 qpair failed and we were unable to recover it. 00:31:44.658 [2024-12-05 14:03:27.098508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.658 [2024-12-05 14:03:27.098531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.658 qpair failed and we were unable to recover it. 
00:31:44.658 [2024-12-05 14:03:27.098688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.658 [2024-12-05 14:03:27.098710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.658 qpair failed and we were unable to recover it. 00:31:44.658 [2024-12-05 14:03:27.098983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.658 [2024-12-05 14:03:27.099004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.658 qpair failed and we were unable to recover it. 00:31:44.658 [2024-12-05 14:03:27.099255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.658 [2024-12-05 14:03:27.099278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.658 qpair failed and we were unable to recover it. 00:31:44.658 [2024-12-05 14:03:27.099394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.658 [2024-12-05 14:03:27.099418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.658 qpair failed and we were unable to recover it. 00:31:44.658 [2024-12-05 14:03:27.099571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.658 [2024-12-05 14:03:27.099593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.658 qpair failed and we were unable to recover it. 
00:31:44.658 [2024-12-05 14:03:27.099764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.658 [2024-12-05 14:03:27.099787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.658 qpair failed and we were unable to recover it. 00:31:44.658 [2024-12-05 14:03:27.099980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.658 [2024-12-05 14:03:27.100012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.658 qpair failed and we were unable to recover it. 00:31:44.658 [2024-12-05 14:03:27.100259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.658 [2024-12-05 14:03:27.100291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.658 qpair failed and we were unable to recover it. 00:31:44.658 [2024-12-05 14:03:27.100484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.658 [2024-12-05 14:03:27.100525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.658 qpair failed and we were unable to recover it. 00:31:44.658 [2024-12-05 14:03:27.100728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.658 [2024-12-05 14:03:27.100750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.658 qpair failed and we were unable to recover it. 
00:31:44.658 [2024-12-05 14:03:27.100843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.658 [2024-12-05 14:03:27.100863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.658 qpair failed and we were unable to recover it. 00:31:44.658 [2024-12-05 14:03:27.101042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.658 [2024-12-05 14:03:27.101065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.658 qpair failed and we were unable to recover it. 00:31:44.658 [2024-12-05 14:03:27.101292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.658 [2024-12-05 14:03:27.101315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.658 qpair failed and we were unable to recover it. 00:31:44.658 [2024-12-05 14:03:27.101494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.658 [2024-12-05 14:03:27.101520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.658 qpair failed and we were unable to recover it. 00:31:44.658 [2024-12-05 14:03:27.101644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.658 [2024-12-05 14:03:27.101676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.658 qpair failed and we were unable to recover it. 
00:31:44.658 [2024-12-05 14:03:27.101860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.658 [2024-12-05 14:03:27.101893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.658 qpair failed and we were unable to recover it. 00:31:44.658 [2024-12-05 14:03:27.102007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.658 [2024-12-05 14:03:27.102039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.658 qpair failed and we were unable to recover it. 00:31:44.658 [2024-12-05 14:03:27.102149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.658 [2024-12-05 14:03:27.102182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.658 qpair failed and we were unable to recover it. 00:31:44.658 [2024-12-05 14:03:27.102382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.658 [2024-12-05 14:03:27.102416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.658 qpair failed and we were unable to recover it. 00:31:44.658 [2024-12-05 14:03:27.102626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.658 [2024-12-05 14:03:27.102666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.658 qpair failed and we were unable to recover it. 
00:31:44.658 [2024-12-05 14:03:27.102832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.658 [2024-12-05 14:03:27.102857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.658 qpair failed and we were unable to recover it. 00:31:44.658 [2024-12-05 14:03:27.103038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.658 [2024-12-05 14:03:27.103072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.658 qpair failed and we were unable to recover it. 00:31:44.658 [2024-12-05 14:03:27.103302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.658 [2024-12-05 14:03:27.103335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.658 qpair failed and we were unable to recover it. 00:31:44.658 [2024-12-05 14:03:27.103480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.658 [2024-12-05 14:03:27.103516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.658 qpair failed and we were unable to recover it. 00:31:44.658 [2024-12-05 14:03:27.103646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.658 [2024-12-05 14:03:27.103669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.658 qpair failed and we were unable to recover it. 
00:31:44.658 [2024-12-05 14:03:27.103846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.658 [2024-12-05 14:03:27.103869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.658 qpair failed and we were unable to recover it.
00:31:44.658 [2024-12-05 14:03:27.103964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.658 [2024-12-05 14:03:27.103991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.658 qpair failed and we were unable to recover it.
00:31:44.658 [2024-12-05 14:03:27.104213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.659 [2024-12-05 14:03:27.104260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.659 qpair failed and we were unable to recover it.
00:31:44.659 [2024-12-05 14:03:27.104387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.659 [2024-12-05 14:03:27.104421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.659 qpair failed and we were unable to recover it.
00:31:44.659 [2024-12-05 14:03:27.104622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.659 [2024-12-05 14:03:27.104655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.659 qpair failed and we were unable to recover it.
00:31:44.659 [2024-12-05 14:03:27.104920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.659 [2024-12-05 14:03:27.104952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.659 qpair failed and we were unable to recover it.
00:31:44.659 [2024-12-05 14:03:27.105186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.659 [2024-12-05 14:03:27.105220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.659 qpair failed and we were unable to recover it.
00:31:44.659 [2024-12-05 14:03:27.105335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.659 [2024-12-05 14:03:27.105385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.659 qpair failed and we were unable to recover it.
00:31:44.659 [2024-12-05 14:03:27.105631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.659 [2024-12-05 14:03:27.105664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.659 qpair failed and we were unable to recover it.
00:31:44.659 [2024-12-05 14:03:27.105771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.659 [2024-12-05 14:03:27.105803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.659 qpair failed and we were unable to recover it.
00:31:44.659 [2024-12-05 14:03:27.105926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.659 [2024-12-05 14:03:27.105957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.659 qpair failed and we were unable to recover it.
00:31:44.659 [2024-12-05 14:03:27.106206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.659 [2024-12-05 14:03:27.106239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.659 qpair failed and we were unable to recover it.
00:31:44.659 [2024-12-05 14:03:27.106492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.659 [2024-12-05 14:03:27.106516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.659 qpair failed and we were unable to recover it.
00:31:44.659 [2024-12-05 14:03:27.106685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.659 [2024-12-05 14:03:27.106707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.659 qpair failed and we were unable to recover it.
00:31:44.659 [2024-12-05 14:03:27.106870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.659 [2024-12-05 14:03:27.106891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.659 qpair failed and we were unable to recover it.
00:31:44.659 [2024-12-05 14:03:27.107066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.659 [2024-12-05 14:03:27.107090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.659 qpair failed and we were unable to recover it.
00:31:44.659 [2024-12-05 14:03:27.107244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.659 [2024-12-05 14:03:27.107268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.659 qpair failed and we were unable to recover it.
00:31:44.659 [2024-12-05 14:03:27.107445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.659 [2024-12-05 14:03:27.107468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.659 qpair failed and we were unable to recover it.
00:31:44.659 [2024-12-05 14:03:27.107638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.659 [2024-12-05 14:03:27.107662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.659 qpair failed and we were unable to recover it.
00:31:44.659 [2024-12-05 14:03:27.107766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.659 [2024-12-05 14:03:27.107788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.659 qpair failed and we were unable to recover it.
00:31:44.659 [2024-12-05 14:03:27.107945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.659 [2024-12-05 14:03:27.107969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.659 qpair failed and we were unable to recover it.
00:31:44.659 [2024-12-05 14:03:27.108136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.659 [2024-12-05 14:03:27.108160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.659 qpair failed and we were unable to recover it.
00:31:44.659 [2024-12-05 14:03:27.108365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.659 [2024-12-05 14:03:27.108397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.659 qpair failed and we were unable to recover it.
00:31:44.659 [2024-12-05 14:03:27.108560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.659 [2024-12-05 14:03:27.108583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.659 qpair failed and we were unable to recover it.
00:31:44.659 [2024-12-05 14:03:27.108695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.659 [2024-12-05 14:03:27.108718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.659 qpair failed and we were unable to recover it.
00:31:44.660 [2024-12-05 14:03:27.108910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.660 [2024-12-05 14:03:27.108932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.660 qpair failed and we were unable to recover it.
00:31:44.660 [2024-12-05 14:03:27.109103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.660 [2024-12-05 14:03:27.109125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.660 qpair failed and we were unable to recover it.
00:31:44.660 [2024-12-05 14:03:27.109219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.660 [2024-12-05 14:03:27.109243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.660 qpair failed and we were unable to recover it.
00:31:44.660 [2024-12-05 14:03:27.109356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.660 [2024-12-05 14:03:27.109389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.660 qpair failed and we were unable to recover it.
00:31:44.660 [2024-12-05 14:03:27.109498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.660 [2024-12-05 14:03:27.109520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.660 qpair failed and we were unable to recover it.
00:31:44.660 [2024-12-05 14:03:27.109619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.660 [2024-12-05 14:03:27.109642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.660 qpair failed and we were unable to recover it.
00:31:44.660 [2024-12-05 14:03:27.109752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.660 [2024-12-05 14:03:27.109778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.660 qpair failed and we were unable to recover it.
00:31:44.660 [2024-12-05 14:03:27.110003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.660 [2024-12-05 14:03:27.110027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.660 qpair failed and we were unable to recover it.
00:31:44.660 [2024-12-05 14:03:27.110116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.660 [2024-12-05 14:03:27.110139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.660 qpair failed and we were unable to recover it.
00:31:44.660 [2024-12-05 14:03:27.110297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.660 [2024-12-05 14:03:27.110320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.660 qpair failed and we were unable to recover it.
00:31:44.660 [2024-12-05 14:03:27.110546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.660 [2024-12-05 14:03:27.110570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.660 qpair failed and we were unable to recover it.
00:31:44.660 [2024-12-05 14:03:27.110756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.660 [2024-12-05 14:03:27.110779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.660 qpair failed and we were unable to recover it.
00:31:44.660 [2024-12-05 14:03:27.110950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.660 [2024-12-05 14:03:27.110972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.660 qpair failed and we were unable to recover it.
00:31:44.660 [2024-12-05 14:03:27.111147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.660 [2024-12-05 14:03:27.111170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.660 qpair failed and we were unable to recover it.
00:31:44.660 [2024-12-05 14:03:27.111268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.660 [2024-12-05 14:03:27.111289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.660 qpair failed and we were unable to recover it.
00:31:44.660 [2024-12-05 14:03:27.111398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.660 [2024-12-05 14:03:27.111422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.660 qpair failed and we were unable to recover it.
00:31:44.660 [2024-12-05 14:03:27.111573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.660 [2024-12-05 14:03:27.111595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.660 qpair failed and we were unable to recover it.
00:31:44.660 [2024-12-05 14:03:27.111708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.660 [2024-12-05 14:03:27.111730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.660 qpair failed and we were unable to recover it.
00:31:44.660 [2024-12-05 14:03:27.111896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.660 [2024-12-05 14:03:27.111918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.660 qpair failed and we were unable to recover it.
00:31:44.660 [2024-12-05 14:03:27.112100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.660 [2024-12-05 14:03:27.112124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.660 qpair failed and we were unable to recover it.
00:31:44.660 [2024-12-05 14:03:27.112302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.660 [2024-12-05 14:03:27.112324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.660 qpair failed and we were unable to recover it.
00:31:44.660 [2024-12-05 14:03:27.112418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.660 [2024-12-05 14:03:27.112440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.660 qpair failed and we were unable to recover it.
00:31:44.660 [2024-12-05 14:03:27.112626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.660 [2024-12-05 14:03:27.112649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.660 qpair failed and we were unable to recover it.
00:31:44.660 [2024-12-05 14:03:27.112801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.660 [2024-12-05 14:03:27.112823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.660 qpair failed and we were unable to recover it.
00:31:44.660 [2024-12-05 14:03:27.113105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.660 [2024-12-05 14:03:27.113138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.660 qpair failed and we were unable to recover it.
00:31:44.660 [2024-12-05 14:03:27.113278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.660 [2024-12-05 14:03:27.113301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.660 qpair failed and we were unable to recover it.
00:31:44.660 [2024-12-05 14:03:27.113415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.660 [2024-12-05 14:03:27.113436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.660 qpair failed and we were unable to recover it.
00:31:44.660 [2024-12-05 14:03:27.113606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.660 [2024-12-05 14:03:27.113630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.660 qpair failed and we were unable to recover it.
00:31:44.660 [2024-12-05 14:03:27.113727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.660 [2024-12-05 14:03:27.113749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.660 qpair failed and we were unable to recover it.
00:31:44.660 [2024-12-05 14:03:27.113939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.661 [2024-12-05 14:03:27.113963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.661 qpair failed and we were unable to recover it.
00:31:44.661 [2024-12-05 14:03:27.114078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.661 [2024-12-05 14:03:27.114099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.661 qpair failed and we were unable to recover it.
00:31:44.661 [2024-12-05 14:03:27.114319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.661 [2024-12-05 14:03:27.114341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.661 qpair failed and we were unable to recover it.
00:31:44.661 [2024-12-05 14:03:27.114452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.661 [2024-12-05 14:03:27.114477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.661 qpair failed and we were unable to recover it.
00:31:44.661 [2024-12-05 14:03:27.114634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.661 [2024-12-05 14:03:27.114655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.661 qpair failed and we were unable to recover it.
00:31:44.661 [2024-12-05 14:03:27.114879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.661 [2024-12-05 14:03:27.114912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.661 qpair failed and we were unable to recover it.
00:31:44.661 [2024-12-05 14:03:27.115034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.661 [2024-12-05 14:03:27.115067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.661 qpair failed and we were unable to recover it.
00:31:44.661 [2024-12-05 14:03:27.115239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.661 [2024-12-05 14:03:27.115271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.661 qpair failed and we were unable to recover it.
00:31:44.661 [2024-12-05 14:03:27.115522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.661 [2024-12-05 14:03:27.115545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.661 qpair failed and we were unable to recover it.
00:31:44.661 [2024-12-05 14:03:27.115735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.661 [2024-12-05 14:03:27.115776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.661 qpair failed and we were unable to recover it.
00:31:44.661 [2024-12-05 14:03:27.115914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.661 [2024-12-05 14:03:27.115945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.661 qpair failed and we were unable to recover it.
00:31:44.661 [2024-12-05 14:03:27.116160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.661 [2024-12-05 14:03:27.116193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.661 qpair failed and we were unable to recover it.
00:31:44.661 [2024-12-05 14:03:27.116409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.661 [2024-12-05 14:03:27.116433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.661 qpair failed and we were unable to recover it.
00:31:44.661 [2024-12-05 14:03:27.116654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.661 [2024-12-05 14:03:27.116675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.661 qpair failed and we were unable to recover it.
00:31:44.661 [2024-12-05 14:03:27.116829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.661 [2024-12-05 14:03:27.116851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.661 qpair failed and we were unable to recover it.
00:31:44.661 [2024-12-05 14:03:27.116953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.661 [2024-12-05 14:03:27.116983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.661 qpair failed and we were unable to recover it.
00:31:44.661 [2024-12-05 14:03:27.117158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.661 [2024-12-05 14:03:27.117181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.661 qpair failed and we were unable to recover it.
00:31:44.661 [2024-12-05 14:03:27.117353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.661 [2024-12-05 14:03:27.117382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.661 qpair failed and we were unable to recover it.
00:31:44.661 [2024-12-05 14:03:27.117481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.661 [2024-12-05 14:03:27.117502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.661 qpair failed and we were unable to recover it.
00:31:44.661 [2024-12-05 14:03:27.117690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.661 [2024-12-05 14:03:27.117712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.661 qpair failed and we were unable to recover it.
00:31:44.661 [2024-12-05 14:03:27.117935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.661 [2024-12-05 14:03:27.117967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.661 qpair failed and we were unable to recover it.
00:31:44.661 [2024-12-05 14:03:27.118098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.661 [2024-12-05 14:03:27.118131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.661 qpair failed and we were unable to recover it.
00:31:44.661 [2024-12-05 14:03:27.118317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.661 [2024-12-05 14:03:27.118351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.661 qpair failed and we were unable to recover it.
00:31:44.661 [2024-12-05 14:03:27.118552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.661 [2024-12-05 14:03:27.118575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.661 qpair failed and we were unable to recover it.
00:31:44.661 [2024-12-05 14:03:27.118692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.661 [2024-12-05 14:03:27.118714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.661 qpair failed and we were unable to recover it.
00:31:44.661 [2024-12-05 14:03:27.118939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.661 [2024-12-05 14:03:27.118960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.661 qpair failed and we were unable to recover it.
00:31:44.661 [2024-12-05 14:03:27.119123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.661 [2024-12-05 14:03:27.119147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.661 qpair failed and we were unable to recover it.
00:31:44.661 [2024-12-05 14:03:27.119233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.661 [2024-12-05 14:03:27.119256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.661 qpair failed and we were unable to recover it.
00:31:44.661 [2024-12-05 14:03:27.119490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.661 [2024-12-05 14:03:27.119514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.661 qpair failed and we were unable to recover it.
00:31:44.661 [2024-12-05 14:03:27.119628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.661 [2024-12-05 14:03:27.119652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.661 qpair failed and we were unable to recover it.
00:31:44.662 [2024-12-05 14:03:27.119806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.662 [2024-12-05 14:03:27.119828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.662 qpair failed and we were unable to recover it.
00:31:44.662 [2024-12-05 14:03:27.120049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.662 [2024-12-05 14:03:27.120081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.662 qpair failed and we were unable to recover it.
00:31:44.662 [2024-12-05 14:03:27.120187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.662 [2024-12-05 14:03:27.120220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.662 qpair failed and we were unable to recover it.
00:31:44.662 [2024-12-05 14:03:27.120409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.662 [2024-12-05 14:03:27.120442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.662 qpair failed and we were unable to recover it.
00:31:44.662 [2024-12-05 14:03:27.120570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.662 [2024-12-05 14:03:27.120593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.662 qpair failed and we were unable to recover it.
00:31:44.662 [2024-12-05 14:03:27.120768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.662 [2024-12-05 14:03:27.120790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.662 qpair failed and we were unable to recover it.
00:31:44.662 [2024-12-05 14:03:27.120944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.662 [2024-12-05 14:03:27.120967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.662 qpair failed and we were unable to recover it.
00:31:44.662 [2024-12-05 14:03:27.121068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.662 [2024-12-05 14:03:27.121090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.662 qpair failed and we were unable to recover it.
00:31:44.662 [2024-12-05 14:03:27.121261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.662 [2024-12-05 14:03:27.121284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.662 qpair failed and we were unable to recover it.
00:31:44.662 [2024-12-05 14:03:27.121382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.662 [2024-12-05 14:03:27.121404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.662 qpair failed and we were unable to recover it.
00:31:44.662 [2024-12-05 14:03:27.121574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.662 [2024-12-05 14:03:27.121597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.662 qpair failed and we were unable to recover it.
00:31:44.662 [2024-12-05 14:03:27.121817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.662 [2024-12-05 14:03:27.121840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.662 qpair failed and we were unable to recover it.
00:31:44.662 [2024-12-05 14:03:27.122045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.662 [2024-12-05 14:03:27.122072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.662 qpair failed and we were unable to recover it.
00:31:44.662 [2024-12-05 14:03:27.122239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.662 [2024-12-05 14:03:27.122262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.662 qpair failed and we were unable to recover it.
00:31:44.662 [2024-12-05 14:03:27.122381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.662 [2024-12-05 14:03:27.122405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.662 qpair failed and we were unable to recover it.
00:31:44.662 [2024-12-05 14:03:27.122575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.662 [2024-12-05 14:03:27.122597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.662 qpair failed and we were unable to recover it.
00:31:44.662 [2024-12-05 14:03:27.122825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.662 [2024-12-05 14:03:27.122847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.662 qpair failed and we were unable to recover it.
00:31:44.662 [2024-12-05 14:03:27.122939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.662 [2024-12-05 14:03:27.122961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.662 qpair failed and we were unable to recover it.
00:31:44.662 [2024-12-05 14:03:27.123046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.662 [2024-12-05 14:03:27.123070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.662 qpair failed and we were unable to recover it.
00:31:44.662 [2024-12-05 14:03:27.123270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.662 [2024-12-05 14:03:27.123294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.662 qpair failed and we were unable to recover it.
00:31:44.662 [2024-12-05 14:03:27.123544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.662 [2024-12-05 14:03:27.123569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.662 qpair failed and we were unable to recover it.
00:31:44.662 [2024-12-05 14:03:27.123723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.662 [2024-12-05 14:03:27.123746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.662 qpair failed and we were unable to recover it.
00:31:44.662 [2024-12-05 14:03:27.123910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.662 [2024-12-05 14:03:27.123962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.662 qpair failed and we were unable to recover it.
00:31:44.662 [2024-12-05 14:03:27.124087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.662 [2024-12-05 14:03:27.124118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.662 qpair failed and we were unable to recover it.
00:31:44.662 [2024-12-05 14:03:27.124244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.662 [2024-12-05 14:03:27.124277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.662 qpair failed and we were unable to recover it.
00:31:44.662 [2024-12-05 14:03:27.124409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.662 [2024-12-05 14:03:27.124443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.662 qpair failed and we were unable to recover it.
00:31:44.662 [2024-12-05 14:03:27.124640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.662 [2024-12-05 14:03:27.124662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.662 qpair failed and we were unable to recover it.
00:31:44.662 [2024-12-05 14:03:27.124813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.662 [2024-12-05 14:03:27.124835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.662 qpair failed and we were unable to recover it.
00:31:44.663 [2024-12-05 14:03:27.125079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.663 [2024-12-05 14:03:27.125113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.663 qpair failed and we were unable to recover it.
00:31:44.663 [2024-12-05 14:03:27.125292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.663 [2024-12-05 14:03:27.125325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.663 qpair failed and we were unable to recover it.
00:31:44.663 [2024-12-05 14:03:27.125524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.663 [2024-12-05 14:03:27.125564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.663 qpair failed and we were unable to recover it.
00:31:44.663 [2024-12-05 14:03:27.125714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.663 [2024-12-05 14:03:27.125735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.663 qpair failed and we were unable to recover it. 00:31:44.663 [2024-12-05 14:03:27.125905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.663 [2024-12-05 14:03:27.125927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.663 qpair failed and we were unable to recover it. 00:31:44.663 [2024-12-05 14:03:27.126111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.663 [2024-12-05 14:03:27.126133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.663 qpair failed and we were unable to recover it. 00:31:44.663 [2024-12-05 14:03:27.126249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.663 [2024-12-05 14:03:27.126272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.663 qpair failed and we were unable to recover it. 00:31:44.663 [2024-12-05 14:03:27.126515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.663 [2024-12-05 14:03:27.126538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.663 qpair failed and we were unable to recover it. 
00:31:44.663 [2024-12-05 14:03:27.126762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.663 [2024-12-05 14:03:27.126784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.663 qpair failed and we were unable to recover it. 00:31:44.663 [2024-12-05 14:03:27.126934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.663 [2024-12-05 14:03:27.126955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.663 qpair failed and we were unable to recover it. 00:31:44.663 [2024-12-05 14:03:27.127059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.663 [2024-12-05 14:03:27.127083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.663 qpair failed and we were unable to recover it. 00:31:44.663 [2024-12-05 14:03:27.127264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.663 [2024-12-05 14:03:27.127286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.663 qpair failed and we were unable to recover it. 00:31:44.663 [2024-12-05 14:03:27.127453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.663 [2024-12-05 14:03:27.127477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.663 qpair failed and we were unable to recover it. 
00:31:44.663 [2024-12-05 14:03:27.127564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.663 [2024-12-05 14:03:27.127585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.663 qpair failed and we were unable to recover it. 00:31:44.663 [2024-12-05 14:03:27.127696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.663 [2024-12-05 14:03:27.127717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.663 qpair failed and we were unable to recover it. 00:31:44.663 [2024-12-05 14:03:27.127804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.663 [2024-12-05 14:03:27.127824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.663 qpair failed and we were unable to recover it. 00:31:44.663 [2024-12-05 14:03:27.127995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.663 [2024-12-05 14:03:27.128018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.663 qpair failed and we were unable to recover it. 00:31:44.663 [2024-12-05 14:03:27.128110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.663 [2024-12-05 14:03:27.128133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.663 qpair failed and we were unable to recover it. 
00:31:44.663 [2024-12-05 14:03:27.128283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.663 [2024-12-05 14:03:27.128305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.663 qpair failed and we were unable to recover it. 00:31:44.663 [2024-12-05 14:03:27.128406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.663 [2024-12-05 14:03:27.128429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.663 qpair failed and we were unable to recover it. 00:31:44.663 [2024-12-05 14:03:27.128652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.663 [2024-12-05 14:03:27.128674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.663 qpair failed and we were unable to recover it. 00:31:44.663 [2024-12-05 14:03:27.128762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.663 [2024-12-05 14:03:27.128785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.663 qpair failed and we were unable to recover it. 00:31:44.663 [2024-12-05 14:03:27.128960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.663 [2024-12-05 14:03:27.128986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.663 qpair failed and we were unable to recover it. 
00:31:44.663 [2024-12-05 14:03:27.129105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.663 [2024-12-05 14:03:27.129127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.663 qpair failed and we were unable to recover it. 00:31:44.663 [2024-12-05 14:03:27.129220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.663 [2024-12-05 14:03:27.129242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.663 qpair failed and we were unable to recover it. 00:31:44.663 [2024-12-05 14:03:27.129391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.663 [2024-12-05 14:03:27.129417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.663 qpair failed and we were unable to recover it. 00:31:44.663 [2024-12-05 14:03:27.129528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.663 [2024-12-05 14:03:27.129550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.663 qpair failed and we were unable to recover it. 00:31:44.663 [2024-12-05 14:03:27.129719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.663 [2024-12-05 14:03:27.129741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.663 qpair failed and we were unable to recover it. 
00:31:44.663 [2024-12-05 14:03:27.129894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.664 [2024-12-05 14:03:27.129936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.664 qpair failed and we were unable to recover it. 00:31:44.664 [2024-12-05 14:03:27.130107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.664 [2024-12-05 14:03:27.130140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.664 qpair failed and we were unable to recover it. 00:31:44.664 [2024-12-05 14:03:27.130244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.664 [2024-12-05 14:03:27.130277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.664 qpair failed and we were unable to recover it. 00:31:44.664 [2024-12-05 14:03:27.130413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.664 [2024-12-05 14:03:27.130446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.664 qpair failed and we were unable to recover it. 00:31:44.664 [2024-12-05 14:03:27.130622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.664 [2024-12-05 14:03:27.130644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.664 qpair failed and we were unable to recover it. 
00:31:44.664 [2024-12-05 14:03:27.130798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.664 [2024-12-05 14:03:27.130820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.664 qpair failed and we were unable to recover it. 00:31:44.664 [2024-12-05 14:03:27.130901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.664 [2024-12-05 14:03:27.130921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.664 qpair failed and we were unable to recover it. 00:31:44.664 [2024-12-05 14:03:27.131081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.664 [2024-12-05 14:03:27.131126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.664 qpair failed and we were unable to recover it. 00:31:44.664 [2024-12-05 14:03:27.131391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.664 [2024-12-05 14:03:27.131425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.664 qpair failed and we were unable to recover it. 00:31:44.664 [2024-12-05 14:03:27.131597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.664 [2024-12-05 14:03:27.131630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.664 qpair failed and we were unable to recover it. 
00:31:44.664 [2024-12-05 14:03:27.131769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.664 [2024-12-05 14:03:27.131790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.664 qpair failed and we were unable to recover it. 00:31:44.664 [2024-12-05 14:03:27.132051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.664 [2024-12-05 14:03:27.132083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.664 qpair failed and we were unable to recover it. 00:31:44.664 [2024-12-05 14:03:27.132213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.664 [2024-12-05 14:03:27.132244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.664 qpair failed and we were unable to recover it. 00:31:44.664 [2024-12-05 14:03:27.132364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.664 [2024-12-05 14:03:27.132411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.664 qpair failed and we were unable to recover it. 00:31:44.664 [2024-12-05 14:03:27.132540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.664 [2024-12-05 14:03:27.132584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.664 qpair failed and we were unable to recover it. 
00:31:44.664 [2024-12-05 14:03:27.132687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.664 [2024-12-05 14:03:27.132709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.664 qpair failed and we were unable to recover it. 00:31:44.664 [2024-12-05 14:03:27.132955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.664 [2024-12-05 14:03:27.132977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.664 qpair failed and we were unable to recover it. 00:31:44.664 [2024-12-05 14:03:27.133232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.664 [2024-12-05 14:03:27.133255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.664 qpair failed and we were unable to recover it. 00:31:44.664 [2024-12-05 14:03:27.133351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.664 [2024-12-05 14:03:27.133379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.664 qpair failed and we were unable to recover it. 00:31:44.664 [2024-12-05 14:03:27.133489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.664 [2024-12-05 14:03:27.133511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.664 qpair failed and we were unable to recover it. 
00:31:44.664 [2024-12-05 14:03:27.133670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.664 [2024-12-05 14:03:27.133692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.664 qpair failed and we were unable to recover it. 00:31:44.664 [2024-12-05 14:03:27.133803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.664 [2024-12-05 14:03:27.133826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.664 qpair failed and we were unable to recover it. 00:31:44.664 [2024-12-05 14:03:27.134049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.664 [2024-12-05 14:03:27.134072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.664 qpair failed and we were unable to recover it. 00:31:44.664 [2024-12-05 14:03:27.134190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.664 [2024-12-05 14:03:27.134212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.664 qpair failed and we were unable to recover it. 00:31:44.664 [2024-12-05 14:03:27.134309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.664 [2024-12-05 14:03:27.134331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.664 qpair failed and we were unable to recover it. 
00:31:44.664 [2024-12-05 14:03:27.134453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.664 [2024-12-05 14:03:27.134475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.664 qpair failed and we were unable to recover it. 00:31:44.664 [2024-12-05 14:03:27.134698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.664 [2024-12-05 14:03:27.134720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.664 qpair failed and we were unable to recover it. 00:31:44.664 [2024-12-05 14:03:27.134823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.664 [2024-12-05 14:03:27.134844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.664 qpair failed and we were unable to recover it. 00:31:44.664 [2024-12-05 14:03:27.135031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.664 [2024-12-05 14:03:27.135053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.664 qpair failed and we were unable to recover it. 00:31:44.664 [2024-12-05 14:03:27.135293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.665 [2024-12-05 14:03:27.135315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.665 qpair failed and we were unable to recover it. 
00:31:44.665 [2024-12-05 14:03:27.135411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.665 [2024-12-05 14:03:27.135433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.665 qpair failed and we were unable to recover it. 00:31:44.665 [2024-12-05 14:03:27.135543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.665 [2024-12-05 14:03:27.135566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.665 qpair failed and we were unable to recover it. 00:31:44.665 [2024-12-05 14:03:27.135734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.665 [2024-12-05 14:03:27.135756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.665 qpair failed and we were unable to recover it. 00:31:44.665 [2024-12-05 14:03:27.135913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.665 [2024-12-05 14:03:27.135935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.665 qpair failed and we were unable to recover it. 00:31:44.665 [2024-12-05 14:03:27.136026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.665 [2024-12-05 14:03:27.136048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.665 qpair failed and we were unable to recover it. 
00:31:44.665 [2024-12-05 14:03:27.136160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.665 [2024-12-05 14:03:27.136181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.665 qpair failed and we were unable to recover it. 00:31:44.665 [2024-12-05 14:03:27.136267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.665 [2024-12-05 14:03:27.136287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.665 qpair failed and we were unable to recover it. 00:31:44.665 [2024-12-05 14:03:27.136381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.665 [2024-12-05 14:03:27.136404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.665 qpair failed and we were unable to recover it. 00:31:44.665 [2024-12-05 14:03:27.136556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.665 [2024-12-05 14:03:27.136596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.665 qpair failed and we were unable to recover it. 00:31:44.665 [2024-12-05 14:03:27.136714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.665 [2024-12-05 14:03:27.136749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.665 qpair failed and we were unable to recover it. 
00:31:44.665 [2024-12-05 14:03:27.136864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.665 [2024-12-05 14:03:27.136897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.665 qpair failed and we were unable to recover it. 00:31:44.665 [2024-12-05 14:03:27.137031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.665 [2024-12-05 14:03:27.137063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.665 qpair failed and we were unable to recover it. 00:31:44.665 [2024-12-05 14:03:27.137179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.665 [2024-12-05 14:03:27.137210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.665 qpair failed and we were unable to recover it. 00:31:44.665 [2024-12-05 14:03:27.137332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.665 [2024-12-05 14:03:27.137353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.665 qpair failed and we were unable to recover it. 00:31:44.665 [2024-12-05 14:03:27.137512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.665 [2024-12-05 14:03:27.137538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.665 qpair failed and we were unable to recover it. 
00:31:44.665 [2024-12-05 14:03:27.137685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.665 [2024-12-05 14:03:27.137706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.665 qpair failed and we were unable to recover it. 00:31:44.665 [2024-12-05 14:03:27.137879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.665 [2024-12-05 14:03:27.137901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.665 qpair failed and we were unable to recover it. 00:31:44.665 [2024-12-05 14:03:27.137984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.665 [2024-12-05 14:03:27.138004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.665 qpair failed and we were unable to recover it. 00:31:44.665 [2024-12-05 14:03:27.138171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.665 [2024-12-05 14:03:27.138192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.665 qpair failed and we were unable to recover it. 00:31:44.665 [2024-12-05 14:03:27.138342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.665 [2024-12-05 14:03:27.138365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.665 qpair failed and we were unable to recover it. 
00:31:44.665 [2024-12-05 14:03:27.138461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.665 [2024-12-05 14:03:27.138483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.665 qpair failed and we were unable to recover it.
00:31:44.665 - 00:31:44.669 [the same error triplet repeats for every subsequent connection attempt from 14:03:27.138725 through 14:03:27.158850: posix_sock_create connect() to 10.0.0.2 port 4420 fails with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0xbe5be0, and each qpair fails and cannot be recovered]
00:31:44.669 [2024-12-05 14:03:27.158967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.669 [2024-12-05 14:03:27.158989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.669 qpair failed and we were unable to recover it. 00:31:44.669 [2024-12-05 14:03:27.159083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.669 [2024-12-05 14:03:27.159104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.669 qpair failed and we were unable to recover it. 00:31:44.669 [2024-12-05 14:03:27.159275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.669 [2024-12-05 14:03:27.159296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.669 qpair failed and we were unable to recover it. 00:31:44.669 [2024-12-05 14:03:27.159450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.669 [2024-12-05 14:03:27.159472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.669 qpair failed and we were unable to recover it. 00:31:44.669 [2024-12-05 14:03:27.159620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.669 [2024-12-05 14:03:27.159642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.669 qpair failed and we were unable to recover it. 
00:31:44.669 [2024-12-05 14:03:27.159795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.669 [2024-12-05 14:03:27.159817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.669 qpair failed and we were unable to recover it. 00:31:44.669 [2024-12-05 14:03:27.159970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.669 [2024-12-05 14:03:27.159992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.669 qpair failed and we were unable to recover it. 00:31:44.669 [2024-12-05 14:03:27.160171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.669 [2024-12-05 14:03:27.160193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.669 qpair failed and we were unable to recover it. 00:31:44.669 [2024-12-05 14:03:27.160341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.669 [2024-12-05 14:03:27.160364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.669 qpair failed and we were unable to recover it. 00:31:44.669 [2024-12-05 14:03:27.160547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.669 [2024-12-05 14:03:27.160573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.669 qpair failed and we were unable to recover it. 
00:31:44.669 [2024-12-05 14:03:27.160680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.669 [2024-12-05 14:03:27.160703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.669 qpair failed and we were unable to recover it. 00:31:44.669 [2024-12-05 14:03:27.160851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.669 [2024-12-05 14:03:27.160872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.669 qpair failed and we were unable to recover it. 00:31:44.669 [2024-12-05 14:03:27.160976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.669 [2024-12-05 14:03:27.160997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.669 qpair failed and we were unable to recover it. 00:31:44.669 [2024-12-05 14:03:27.161096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.669 [2024-12-05 14:03:27.161118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.669 qpair failed and we were unable to recover it. 00:31:44.669 [2024-12-05 14:03:27.161210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.669 [2024-12-05 14:03:27.161231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.669 qpair failed and we were unable to recover it. 
00:31:44.669 [2024-12-05 14:03:27.161330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.669 [2024-12-05 14:03:27.161352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.669 qpair failed and we were unable to recover it. 00:31:44.669 [2024-12-05 14:03:27.161447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.669 [2024-12-05 14:03:27.161469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.669 qpair failed and we were unable to recover it. 00:31:44.669 [2024-12-05 14:03:27.161560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.669 [2024-12-05 14:03:27.161583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.669 qpair failed and we were unable to recover it. 00:31:44.669 [2024-12-05 14:03:27.161666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.669 [2024-12-05 14:03:27.161688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.669 qpair failed and we were unable to recover it. 00:31:44.669 [2024-12-05 14:03:27.161857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.669 [2024-12-05 14:03:27.161879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.669 qpair failed and we were unable to recover it. 
00:31:44.669 [2024-12-05 14:03:27.161973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.669 [2024-12-05 14:03:27.161995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.669 qpair failed and we were unable to recover it. 00:31:44.669 [2024-12-05 14:03:27.162077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.669 [2024-12-05 14:03:27.162098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.669 qpair failed and we were unable to recover it. 00:31:44.669 [2024-12-05 14:03:27.162260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.162281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 00:31:44.670 [2024-12-05 14:03:27.162377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.162400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 00:31:44.670 [2024-12-05 14:03:27.162487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.162507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 
00:31:44.670 [2024-12-05 14:03:27.162589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.162611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 00:31:44.670 [2024-12-05 14:03:27.162717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.162743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 00:31:44.670 [2024-12-05 14:03:27.162837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.162858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 00:31:44.670 [2024-12-05 14:03:27.163048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.163070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 00:31:44.670 [2024-12-05 14:03:27.163237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.163259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 
00:31:44.670 [2024-12-05 14:03:27.163352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.163380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 00:31:44.670 [2024-12-05 14:03:27.163532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.163554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 00:31:44.670 [2024-12-05 14:03:27.163694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.163718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 00:31:44.670 [2024-12-05 14:03:27.163893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.163914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 00:31:44.670 [2024-12-05 14:03:27.164072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.164095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 
00:31:44.670 [2024-12-05 14:03:27.164219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.164243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 00:31:44.670 [2024-12-05 14:03:27.164347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.164381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 00:31:44.670 [2024-12-05 14:03:27.164492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.164515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 00:31:44.670 [2024-12-05 14:03:27.164609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.164631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 00:31:44.670 [2024-12-05 14:03:27.164724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.164746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 
00:31:44.670 [2024-12-05 14:03:27.164837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.164859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 00:31:44.670 [2024-12-05 14:03:27.164953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.164975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 00:31:44.670 [2024-12-05 14:03:27.165080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.165101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 00:31:44.670 [2024-12-05 14:03:27.165363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.165392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 00:31:44.670 [2024-12-05 14:03:27.165493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.165515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 
00:31:44.670 [2024-12-05 14:03:27.165644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.165666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 00:31:44.670 [2024-12-05 14:03:27.165780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.165803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 00:31:44.670 [2024-12-05 14:03:27.165963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.165986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 00:31:44.670 [2024-12-05 14:03:27.166162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.166184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 00:31:44.670 [2024-12-05 14:03:27.166278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.166301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 
00:31:44.670 [2024-12-05 14:03:27.166463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.166486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 00:31:44.670 [2024-12-05 14:03:27.166603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.166625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 00:31:44.670 [2024-12-05 14:03:27.166785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.166806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 00:31:44.670 [2024-12-05 14:03:27.166903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.166925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 00:31:44.670 [2024-12-05 14:03:27.167144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.167166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 
00:31:44.670 [2024-12-05 14:03:27.167408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.167430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 00:31:44.670 [2024-12-05 14:03:27.167584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.167606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 00:31:44.670 [2024-12-05 14:03:27.167778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.167809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 00:31:44.670 [2024-12-05 14:03:27.168009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.670 [2024-12-05 14:03:27.168041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.670 qpair failed and we were unable to recover it. 00:31:44.671 [2024-12-05 14:03:27.168216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.671 [2024-12-05 14:03:27.168247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.671 qpair failed and we were unable to recover it. 
00:31:44.671 [2024-12-05 14:03:27.168417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.671 [2024-12-05 14:03:27.168440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.671 qpair failed and we were unable to recover it. 00:31:44.671 [2024-12-05 14:03:27.168654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.671 [2024-12-05 14:03:27.168686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.671 qpair failed and we were unable to recover it. 00:31:44.671 [2024-12-05 14:03:27.168803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.671 [2024-12-05 14:03:27.168836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.671 qpair failed and we were unable to recover it. 00:31:44.671 [2024-12-05 14:03:27.169080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.671 [2024-12-05 14:03:27.169111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.671 qpair failed and we were unable to recover it. 00:31:44.671 [2024-12-05 14:03:27.169353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.671 [2024-12-05 14:03:27.169395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.671 qpair failed and we were unable to recover it. 
00:31:44.671 [2024-12-05 14:03:27.169574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.671 [2024-12-05 14:03:27.169598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.671 qpair failed and we were unable to recover it. 00:31:44.671 [2024-12-05 14:03:27.169745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.671 [2024-12-05 14:03:27.169766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.671 qpair failed and we were unable to recover it. 00:31:44.671 [2024-12-05 14:03:27.169878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.671 [2024-12-05 14:03:27.169900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.671 qpair failed and we were unable to recover it. 00:31:44.671 [2024-12-05 14:03:27.170130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.671 [2024-12-05 14:03:27.170161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.671 qpair failed and we were unable to recover it. 00:31:44.671 [2024-12-05 14:03:27.170338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.671 [2024-12-05 14:03:27.170377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.671 qpair failed and we were unable to recover it. 
00:31:44.671 [2024-12-05 14:03:27.170512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.671 [2024-12-05 14:03:27.170545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.671 qpair failed and we were unable to recover it. 00:31:44.671 [2024-12-05 14:03:27.170659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.671 [2024-12-05 14:03:27.170691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.671 qpair failed and we were unable to recover it. 00:31:44.671 [2024-12-05 14:03:27.170810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.671 [2024-12-05 14:03:27.170832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.671 qpair failed and we were unable to recover it. 00:31:44.671 [2024-12-05 14:03:27.171005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.671 [2024-12-05 14:03:27.171026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.671 qpair failed and we were unable to recover it. 00:31:44.671 [2024-12-05 14:03:27.171242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.671 [2024-12-05 14:03:27.171264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.671 qpair failed and we were unable to recover it. 
00:31:44.671 [2024-12-05 14:03:27.171432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.671 [2024-12-05 14:03:27.171456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.671 qpair failed and we were unable to recover it. 00:31:44.671 [2024-12-05 14:03:27.171619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.671 [2024-12-05 14:03:27.171640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.671 qpair failed and we were unable to recover it. 00:31:44.671 [2024-12-05 14:03:27.171808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.671 [2024-12-05 14:03:27.171845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.671 qpair failed and we were unable to recover it. 00:31:44.671 [2024-12-05 14:03:27.172052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.671 [2024-12-05 14:03:27.172083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.671 qpair failed and we were unable to recover it. 00:31:44.671 [2024-12-05 14:03:27.172253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.671 [2024-12-05 14:03:27.172285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.671 qpair failed and we were unable to recover it. 
00:31:44.972 [2024-12-05 14:03:27.179057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.972 [2024-12-05 14:03:27.179132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:44.972 qpair failed and we were unable to recover it.
00:31:44.972 [2024-12-05 14:03:27.180840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:44.972 [2024-12-05 14:03:27.180865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:44.972 qpair failed and we were unable to recover it.
00:31:44.974 [2024-12-05 14:03:27.193409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.974 [2024-12-05 14:03:27.193447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.974 qpair failed and we were unable to recover it. 00:31:44.974 [2024-12-05 14:03:27.193687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.974 [2024-12-05 14:03:27.193709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.974 qpair failed and we were unable to recover it. 00:31:44.974 [2024-12-05 14:03:27.193811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.974 [2024-12-05 14:03:27.193832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.974 qpair failed and we were unable to recover it. 00:31:44.974 [2024-12-05 14:03:27.194091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.974 [2024-12-05 14:03:27.194112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.974 qpair failed and we were unable to recover it. 00:31:44.974 [2024-12-05 14:03:27.194348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.974 [2024-12-05 14:03:27.194391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.974 qpair failed and we were unable to recover it. 
00:31:44.974 [2024-12-05 14:03:27.194557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.974 [2024-12-05 14:03:27.194579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.974 qpair failed and we were unable to recover it. 00:31:44.974 [2024-12-05 14:03:27.194735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.974 [2024-12-05 14:03:27.194758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.974 qpair failed and we were unable to recover it. 00:31:44.974 [2024-12-05 14:03:27.194986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.974 [2024-12-05 14:03:27.195025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.974 qpair failed and we were unable to recover it. 00:31:44.974 [2024-12-05 14:03:27.195230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.974 [2024-12-05 14:03:27.195262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.974 qpair failed and we were unable to recover it. 00:31:44.974 [2024-12-05 14:03:27.195448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.974 [2024-12-05 14:03:27.195472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.974 qpair failed and we were unable to recover it. 
00:31:44.974 [2024-12-05 14:03:27.195724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.974 [2024-12-05 14:03:27.195746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.974 qpair failed and we were unable to recover it. 00:31:44.974 [2024-12-05 14:03:27.195903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.974 [2024-12-05 14:03:27.195925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.974 qpair failed and we were unable to recover it. 00:31:44.974 [2024-12-05 14:03:27.196027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.974 [2024-12-05 14:03:27.196051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.974 qpair failed and we were unable to recover it. 00:31:44.974 [2024-12-05 14:03:27.196152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.974 [2024-12-05 14:03:27.196173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.974 qpair failed and we were unable to recover it. 00:31:44.974 [2024-12-05 14:03:27.196289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.974 [2024-12-05 14:03:27.196313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.974 qpair failed and we were unable to recover it. 
00:31:44.974 [2024-12-05 14:03:27.196488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.974 [2024-12-05 14:03:27.196512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.974 qpair failed and we were unable to recover it. 00:31:44.974 [2024-12-05 14:03:27.196673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.974 [2024-12-05 14:03:27.196696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.974 qpair failed and we were unable to recover it. 00:31:44.974 [2024-12-05 14:03:27.196859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.974 [2024-12-05 14:03:27.196882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.974 qpair failed and we were unable to recover it. 00:31:44.974 [2024-12-05 14:03:27.196994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.974 [2024-12-05 14:03:27.197016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.974 qpair failed and we were unable to recover it. 00:31:44.974 [2024-12-05 14:03:27.197168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.974 [2024-12-05 14:03:27.197193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.974 qpair failed and we were unable to recover it. 
00:31:44.974 [2024-12-05 14:03:27.197300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.974 [2024-12-05 14:03:27.197322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.974 qpair failed and we were unable to recover it. 00:31:44.974 [2024-12-05 14:03:27.197477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.974 [2024-12-05 14:03:27.197501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.974 qpair failed and we were unable to recover it. 00:31:44.974 [2024-12-05 14:03:27.197603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.974 [2024-12-05 14:03:27.197628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.974 qpair failed and we were unable to recover it. 00:31:44.974 [2024-12-05 14:03:27.197792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.974 [2024-12-05 14:03:27.197819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.974 qpair failed and we were unable to recover it. 00:31:44.974 [2024-12-05 14:03:27.197948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.974 [2024-12-05 14:03:27.197970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.974 qpair failed and we were unable to recover it. 
00:31:44.974 [2024-12-05 14:03:27.198088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.974 [2024-12-05 14:03:27.198109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.974 qpair failed and we were unable to recover it. 00:31:44.974 [2024-12-05 14:03:27.198238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.974 [2024-12-05 14:03:27.198260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.974 qpair failed and we were unable to recover it. 00:31:44.974 [2024-12-05 14:03:27.198349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.198375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 00:31:44.975 [2024-12-05 14:03:27.198476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.198497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 00:31:44.975 [2024-12-05 14:03:27.198655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.198676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 
00:31:44.975 [2024-12-05 14:03:27.198831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.198855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 00:31:44.975 [2024-12-05 14:03:27.198951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.198973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 00:31:44.975 [2024-12-05 14:03:27.199122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.199145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 00:31:44.975 [2024-12-05 14:03:27.199231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.199251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 00:31:44.975 [2024-12-05 14:03:27.199341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.199364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 
00:31:44.975 [2024-12-05 14:03:27.199481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.199502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 00:31:44.975 [2024-12-05 14:03:27.199746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.199769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 00:31:44.975 [2024-12-05 14:03:27.199937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.199959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 00:31:44.975 [2024-12-05 14:03:27.200110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.200131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 00:31:44.975 [2024-12-05 14:03:27.200361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.200404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 
00:31:44.975 [2024-12-05 14:03:27.200566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.200588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 00:31:44.975 [2024-12-05 14:03:27.200671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.200691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 00:31:44.975 [2024-12-05 14:03:27.200904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.200927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 00:31:44.975 [2024-12-05 14:03:27.201061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.201082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 00:31:44.975 [2024-12-05 14:03:27.201243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.201264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 
00:31:44.975 [2024-12-05 14:03:27.201355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.201384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 00:31:44.975 [2024-12-05 14:03:27.201569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.201590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 00:31:44.975 [2024-12-05 14:03:27.201808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.201830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 00:31:44.975 [2024-12-05 14:03:27.201927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.201951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 00:31:44.975 [2024-12-05 14:03:27.202110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.202132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 
00:31:44.975 [2024-12-05 14:03:27.202240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.202265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 00:31:44.975 [2024-12-05 14:03:27.202411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.202433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 00:31:44.975 [2024-12-05 14:03:27.202588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.202609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 00:31:44.975 [2024-12-05 14:03:27.202816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.202838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 00:31:44.975 [2024-12-05 14:03:27.202953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.202975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 
00:31:44.975 [2024-12-05 14:03:27.203135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.203158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 00:31:44.975 [2024-12-05 14:03:27.203268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.203290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 00:31:44.975 [2024-12-05 14:03:27.203398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.203421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 00:31:44.975 [2024-12-05 14:03:27.203599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.203642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 00:31:44.975 [2024-12-05 14:03:27.203759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.203790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 
00:31:44.975 [2024-12-05 14:03:27.203977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.204009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 00:31:44.975 [2024-12-05 14:03:27.204280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.204313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 00:31:44.975 [2024-12-05 14:03:27.204566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.204599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 00:31:44.975 [2024-12-05 14:03:27.204733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.204766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 00:31:44.975 [2024-12-05 14:03:27.204939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.975 [2024-12-05 14:03:27.204961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.975 qpair failed and we were unable to recover it. 
00:31:44.975 [2024-12-05 14:03:27.205123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.205145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 00:31:44.976 [2024-12-05 14:03:27.205416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.205438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 00:31:44.976 [2024-12-05 14:03:27.205605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.205627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 00:31:44.976 [2024-12-05 14:03:27.205819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.205852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 00:31:44.976 [2024-12-05 14:03:27.205958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.205991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 
00:31:44.976 [2024-12-05 14:03:27.206111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.206144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 00:31:44.976 [2024-12-05 14:03:27.206326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.206357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 00:31:44.976 [2024-12-05 14:03:27.206657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.206690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 00:31:44.976 [2024-12-05 14:03:27.206902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.206934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 00:31:44.976 [2024-12-05 14:03:27.207119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.207151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 
00:31:44.976 [2024-12-05 14:03:27.207389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.207422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 00:31:44.976 [2024-12-05 14:03:27.207702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.207724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 00:31:44.976 [2024-12-05 14:03:27.207883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.207905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 00:31:44.976 [2024-12-05 14:03:27.208021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.208043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 00:31:44.976 [2024-12-05 14:03:27.208307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.208327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 
00:31:44.976 [2024-12-05 14:03:27.208528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.208552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 00:31:44.976 [2024-12-05 14:03:27.208646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.208669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 00:31:44.976 [2024-12-05 14:03:27.208834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.208856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 00:31:44.976 [2024-12-05 14:03:27.209012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.209033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 00:31:44.976 [2024-12-05 14:03:27.209248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.209281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 
00:31:44.976 [2024-12-05 14:03:27.209462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.209495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 00:31:44.976 [2024-12-05 14:03:27.209681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.209713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 00:31:44.976 [2024-12-05 14:03:27.209897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.209918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 00:31:44.976 [2024-12-05 14:03:27.210104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.210135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 00:31:44.976 [2024-12-05 14:03:27.210258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.210290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 
00:31:44.976 [2024-12-05 14:03:27.210400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.210438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 00:31:44.976 [2024-12-05 14:03:27.210632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.210670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 00:31:44.976 [2024-12-05 14:03:27.210846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.210868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 00:31:44.976 [2024-12-05 14:03:27.210963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.210986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 00:31:44.976 [2024-12-05 14:03:27.211087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.211108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 
00:31:44.976 [2024-12-05 14:03:27.211274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.211297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 00:31:44.976 [2024-12-05 14:03:27.211400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.211422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 00:31:44.976 [2024-12-05 14:03:27.211590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.211613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.976 qpair failed and we were unable to recover it. 00:31:44.976 [2024-12-05 14:03:27.211765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.976 [2024-12-05 14:03:27.211786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 00:31:44.977 [2024-12-05 14:03:27.211952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.211973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 
00:31:44.977 [2024-12-05 14:03:27.212125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.212147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 00:31:44.977 [2024-12-05 14:03:27.212321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.212342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 00:31:44.977 [2024-12-05 14:03:27.212645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.212668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 00:31:44.977 [2024-12-05 14:03:27.212778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.212802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 00:31:44.977 [2024-12-05 14:03:27.212903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.212925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 
00:31:44.977 [2024-12-05 14:03:27.213079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.213101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 00:31:44.977 [2024-12-05 14:03:27.213202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.213224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 00:31:44.977 [2024-12-05 14:03:27.213386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.213409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 00:31:44.977 [2024-12-05 14:03:27.213573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.213597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 00:31:44.977 [2024-12-05 14:03:27.213696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.213717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 
00:31:44.977 [2024-12-05 14:03:27.213891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.213914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 00:31:44.977 [2024-12-05 14:03:27.214001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.214021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 00:31:44.977 [2024-12-05 14:03:27.214120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.214141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 00:31:44.977 [2024-12-05 14:03:27.214359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.214407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 00:31:44.977 [2024-12-05 14:03:27.214542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.214574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 
00:31:44.977 [2024-12-05 14:03:27.214682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.214713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 00:31:44.977 [2024-12-05 14:03:27.214897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.214930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 00:31:44.977 [2024-12-05 14:03:27.215173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.215194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 00:31:44.977 [2024-12-05 14:03:27.215292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.215318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 00:31:44.977 [2024-12-05 14:03:27.215489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.215512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 
00:31:44.977 [2024-12-05 14:03:27.215757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.215778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 00:31:44.977 [2024-12-05 14:03:27.215937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.215958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 00:31:44.977 [2024-12-05 14:03:27.216060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.216082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 00:31:44.977 [2024-12-05 14:03:27.216240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.216262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 00:31:44.977 [2024-12-05 14:03:27.216360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.216388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 
00:31:44.977 [2024-12-05 14:03:27.216612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.216636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 00:31:44.977 [2024-12-05 14:03:27.216757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.216778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 00:31:44.977 [2024-12-05 14:03:27.216946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.216968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 00:31:44.977 [2024-12-05 14:03:27.217174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.217196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 00:31:44.977 [2024-12-05 14:03:27.217410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.217434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 
00:31:44.977 [2024-12-05 14:03:27.217679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.217701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 00:31:44.977 [2024-12-05 14:03:27.217935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.217956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 00:31:44.977 [2024-12-05 14:03:27.218066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.218088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 00:31:44.977 [2024-12-05 14:03:27.218334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.218355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 00:31:44.977 [2024-12-05 14:03:27.218518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.218540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 
00:31:44.977 [2024-12-05 14:03:27.218704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.977 [2024-12-05 14:03:27.218726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.977 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.218893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.218915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.219015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.219037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.219123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.219144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.219387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.219408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 
00:31:44.978 [2024-12-05 14:03:27.219641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.219663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.219754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.219776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.219881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.219902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.220054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.220075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.220250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.220287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 
00:31:44.978 [2024-12-05 14:03:27.220473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.220506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.220625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.220657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.220785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.220817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.221017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.221038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.221191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.221213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 
00:31:44.978 [2024-12-05 14:03:27.221296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.221317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.221430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.221452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.221557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.221579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.221828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.221850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.221932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.221951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 
00:31:44.978 [2024-12-05 14:03:27.222114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.222135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.222291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.222312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.222413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.222435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.222583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.222604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.222822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.222859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 
00:31:44.978 [2024-12-05 14:03:27.223035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.223068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.223351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.223394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.223621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.223653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.223790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.223822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.224002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.224023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 
00:31:44.978 [2024-12-05 14:03:27.224220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.224252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.224381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.224414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.224641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.224663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.224830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.224852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.225017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.225038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 
00:31:44.978 [2024-12-05 14:03:27.225149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.225170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.225333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.225355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.225482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.225504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.225734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.978 [2024-12-05 14:03:27.225756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.978 qpair failed and we were unable to recover it. 00:31:44.978 [2024-12-05 14:03:27.225916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.225937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 
00:31:44.979 [2024-12-05 14:03:27.226089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.226111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 00:31:44.979 [2024-12-05 14:03:27.226291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.226313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 00:31:44.979 [2024-12-05 14:03:27.226409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.226431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 00:31:44.979 [2024-12-05 14:03:27.226604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.226624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 00:31:44.979 [2024-12-05 14:03:27.226793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.226814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 
00:31:44.979 [2024-12-05 14:03:27.226915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.226937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 00:31:44.979 [2024-12-05 14:03:27.227038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.227059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 00:31:44.979 [2024-12-05 14:03:27.227151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.227172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 00:31:44.979 [2024-12-05 14:03:27.227263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.227285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 00:31:44.979 [2024-12-05 14:03:27.227443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.227466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 
00:31:44.979 [2024-12-05 14:03:27.227562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.227584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 00:31:44.979 [2024-12-05 14:03:27.227667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.227697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 00:31:44.979 [2024-12-05 14:03:27.227855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.227876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 00:31:44.979 [2024-12-05 14:03:27.228029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.228051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 00:31:44.979 [2024-12-05 14:03:27.228140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.228162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 
00:31:44.979 [2024-12-05 14:03:27.228310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.228332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 00:31:44.979 [2024-12-05 14:03:27.228504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.228525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 00:31:44.979 [2024-12-05 14:03:27.228670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.228708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 00:31:44.979 [2024-12-05 14:03:27.228970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.229002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 00:31:44.979 [2024-12-05 14:03:27.229178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.229210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 
00:31:44.979 [2024-12-05 14:03:27.229404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.229438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 00:31:44.979 [2024-12-05 14:03:27.229574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.229595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 00:31:44.979 [2024-12-05 14:03:27.229752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.229774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 00:31:44.979 [2024-12-05 14:03:27.230017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.230039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 00:31:44.979 [2024-12-05 14:03:27.230195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.230217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 
00:31:44.979 [2024-12-05 14:03:27.230401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.230434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 00:31:44.979 [2024-12-05 14:03:27.230557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.230588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 00:31:44.979 [2024-12-05 14:03:27.230696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.230727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 00:31:44.979 [2024-12-05 14:03:27.230848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.230878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 00:31:44.979 [2024-12-05 14:03:27.231042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.231064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 
00:31:44.979 [2024-12-05 14:03:27.231288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.231310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 00:31:44.979 [2024-12-05 14:03:27.231405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.231425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 00:31:44.979 [2024-12-05 14:03:27.231610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.231631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 00:31:44.979 [2024-12-05 14:03:27.231845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.231866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 00:31:44.979 [2024-12-05 14:03:27.232013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.232034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 
00:31:44.979 [2024-12-05 14:03:27.232124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.232144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 00:31:44.979 [2024-12-05 14:03:27.232252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.232274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.979 qpair failed and we were unable to recover it. 00:31:44.979 [2024-12-05 14:03:27.232362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.979 [2024-12-05 14:03:27.232388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.232544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.232565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.232678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.232699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 
00:31:44.980 [2024-12-05 14:03:27.232866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.232887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.233070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.233091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.233203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.233224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.233443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.233465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.233557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.233579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 
00:31:44.980 [2024-12-05 14:03:27.233671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.233693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.233851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.233872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.234023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.234044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.234276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.234297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.234456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.234479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 
00:31:44.980 [2024-12-05 14:03:27.234610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.234631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.234801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.234823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.234922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.234956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.235120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.235141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.235289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.235311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 
00:31:44.980 [2024-12-05 14:03:27.235403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.235424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.235667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.235689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.235910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.235931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.236023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.236044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.236145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.236166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 
00:31:44.980 [2024-12-05 14:03:27.236265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.236286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.236377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.236399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.236512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.236533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.236765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.236787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.236942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.236963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 
00:31:44.980 [2024-12-05 14:03:27.237064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.237085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.237382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.237405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.237510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.237531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.237767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.237798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.237988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.238020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 
00:31:44.980 [2024-12-05 14:03:27.238216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.238248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.238364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.238408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.238532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.238563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.238751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.238782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.238915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.238936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 
00:31:44.980 [2024-12-05 14:03:27.239056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.980 [2024-12-05 14:03:27.239077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.980 qpair failed and we were unable to recover it. 00:31:44.980 [2024-12-05 14:03:27.239163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.239182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 00:31:44.981 [2024-12-05 14:03:27.239424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.239447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 00:31:44.981 [2024-12-05 14:03:27.239552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.239574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 00:31:44.981 [2024-12-05 14:03:27.239745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.239767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 
00:31:44.981 [2024-12-05 14:03:27.239869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.239891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 00:31:44.981 [2024-12-05 14:03:27.240059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.240080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 00:31:44.981 [2024-12-05 14:03:27.240174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.240195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 00:31:44.981 [2024-12-05 14:03:27.240283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.240305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 00:31:44.981 [2024-12-05 14:03:27.240393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.240414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 
00:31:44.981 [2024-12-05 14:03:27.240573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.240595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 00:31:44.981 [2024-12-05 14:03:27.240782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.240814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 00:31:44.981 [2024-12-05 14:03:27.240986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.241017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 00:31:44.981 [2024-12-05 14:03:27.241191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.241223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 00:31:44.981 [2024-12-05 14:03:27.241328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.241360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 
00:31:44.981 [2024-12-05 14:03:27.241476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.241510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 00:31:44.981 [2024-12-05 14:03:27.241684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.241719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 00:31:44.981 [2024-12-05 14:03:27.241841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.241872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 00:31:44.981 [2024-12-05 14:03:27.242119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.242191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 00:31:44.981 [2024-12-05 14:03:27.242394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.242432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 
00:31:44.981 [2024-12-05 14:03:27.242556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.242589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 00:31:44.981 [2024-12-05 14:03:27.242797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.242829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 00:31:44.981 [2024-12-05 14:03:27.242955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.242987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 00:31:44.981 [2024-12-05 14:03:27.243106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.243138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 00:31:44.981 [2024-12-05 14:03:27.243255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.243288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 
00:31:44.981 [2024-12-05 14:03:27.243415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.243448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 00:31:44.981 [2024-12-05 14:03:27.243638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.243663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 00:31:44.981 [2024-12-05 14:03:27.243771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.243792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 00:31:44.981 [2024-12-05 14:03:27.243876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.243896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 00:31:44.981 [2024-12-05 14:03:27.244045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.244066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 
00:31:44.981 [2024-12-05 14:03:27.244234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.244255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 00:31:44.981 [2024-12-05 14:03:27.244419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.244440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 00:31:44.981 [2024-12-05 14:03:27.244606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.244628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 00:31:44.981 [2024-12-05 14:03:27.244728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.244750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 00:31:44.981 [2024-12-05 14:03:27.244983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.981 [2024-12-05 14:03:27.245026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.981 qpair failed and we were unable to recover it. 
00:31:44.982 [2024-12-05 14:03:27.245194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.245226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.245332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.245363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.245489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.245521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.245690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.245711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.245828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.245850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 
00:31:44.982 [2024-12-05 14:03:27.246026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.246048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.246215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.246236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.246332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.246353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.246541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.246563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.246669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.246689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 
00:31:44.982 [2024-12-05 14:03:27.246780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.246805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.246908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.246930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.247020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.247042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.247147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.247168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.247251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.247271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 
00:31:44.982 [2024-12-05 14:03:27.247383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.247405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.247556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.247577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.247758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.247780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.247951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.247972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.248069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.248090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 
00:31:44.982 [2024-12-05 14:03:27.248211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.248233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.248345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.248374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.248475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.248504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.248659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.248681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.248868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.248892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 
00:31:44.982 [2024-12-05 14:03:27.249022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.249054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.249189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.249221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.249407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.249440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.249617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.249639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.249742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.249766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 
00:31:44.982 [2024-12-05 14:03:27.249923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.249944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.250099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.250120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.250288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.250309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.250420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.250442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.250534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.250556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 
00:31:44.982 [2024-12-05 14:03:27.250721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.250742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.250894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.250915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.251079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.982 [2024-12-05 14:03:27.251105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.982 qpair failed and we were unable to recover it. 00:31:44.982 [2024-12-05 14:03:27.251299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.251321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 00:31:44.983 [2024-12-05 14:03:27.251413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.251435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 
00:31:44.983 [2024-12-05 14:03:27.251530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.251551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 00:31:44.983 [2024-12-05 14:03:27.251711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.251732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 00:31:44.983 [2024-12-05 14:03:27.251815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.251837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 00:31:44.983 [2024-12-05 14:03:27.251916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.251939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 00:31:44.983 [2024-12-05 14:03:27.252088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.252110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 
00:31:44.983 [2024-12-05 14:03:27.252203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.252225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 00:31:44.983 [2024-12-05 14:03:27.252305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.252326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 00:31:44.983 [2024-12-05 14:03:27.252413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.252434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 00:31:44.983 [2024-12-05 14:03:27.252581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.252602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 00:31:44.983 [2024-12-05 14:03:27.252691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.252724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 
00:31:44.983 [2024-12-05 14:03:27.252809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.252830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 00:31:44.983 [2024-12-05 14:03:27.252923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.252945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 00:31:44.983 [2024-12-05 14:03:27.253096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.253118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 00:31:44.983 [2024-12-05 14:03:27.253200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.253222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 00:31:44.983 [2024-12-05 14:03:27.253382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.253404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 
00:31:44.983 [2024-12-05 14:03:27.253502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.253523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 00:31:44.983 [2024-12-05 14:03:27.253620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.253641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 00:31:44.983 [2024-12-05 14:03:27.253724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.253746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 00:31:44.983 [2024-12-05 14:03:27.253907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.253929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 00:31:44.983 [2024-12-05 14:03:27.254010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.254031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 
00:31:44.983 [2024-12-05 14:03:27.254183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.254204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 00:31:44.983 [2024-12-05 14:03:27.254287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.254308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 00:31:44.983 [2024-12-05 14:03:27.254404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.254427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 00:31:44.983 [2024-12-05 14:03:27.254515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.254536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 00:31:44.983 [2024-12-05 14:03:27.254683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.254704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 
00:31:44.983 [2024-12-05 14:03:27.254865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.254887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 00:31:44.983 [2024-12-05 14:03:27.254984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.255005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 00:31:44.983 [2024-12-05 14:03:27.255089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.255111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 00:31:44.983 [2024-12-05 14:03:27.255196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.255218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 00:31:44.983 [2024-12-05 14:03:27.255305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.255327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 
00:31:44.983 [2024-12-05 14:03:27.255434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.255456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 00:31:44.983 [2024-12-05 14:03:27.255550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.255571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 00:31:44.983 [2024-12-05 14:03:27.255658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.255679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 00:31:44.983 [2024-12-05 14:03:27.255826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.255848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 00:31:44.983 [2024-12-05 14:03:27.255933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.255954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 
00:31:44.983 [2024-12-05 14:03:27.256125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.256146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.983 qpair failed and we were unable to recover it. 00:31:44.983 [2024-12-05 14:03:27.256228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.983 [2024-12-05 14:03:27.256249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.984 qpair failed and we were unable to recover it. 00:31:44.984 [2024-12-05 14:03:27.256328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.984 [2024-12-05 14:03:27.256349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.984 qpair failed and we were unable to recover it. 00:31:44.984 [2024-12-05 14:03:27.256507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.984 [2024-12-05 14:03:27.256533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.984 qpair failed and we were unable to recover it. 00:31:44.984 [2024-12-05 14:03:27.256694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.984 [2024-12-05 14:03:27.256716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.984 qpair failed and we were unable to recover it. 
00:31:44.987 [2024-12-05 14:03:27.275342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.275391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 00:31:44.987 [2024-12-05 14:03:27.275567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.275598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 00:31:44.987 [2024-12-05 14:03:27.275769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.275801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 00:31:44.987 [2024-12-05 14:03:27.276036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.276058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 00:31:44.987 [2024-12-05 14:03:27.276233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.276254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 
00:31:44.987 [2024-12-05 14:03:27.276432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.276454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 00:31:44.987 [2024-12-05 14:03:27.276636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.276668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 00:31:44.987 [2024-12-05 14:03:27.276925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.276957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 00:31:44.987 [2024-12-05 14:03:27.277078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.277109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 00:31:44.987 [2024-12-05 14:03:27.277225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.277257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 
00:31:44.987 [2024-12-05 14:03:27.277380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.277419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 00:31:44.987 [2024-12-05 14:03:27.277617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.277657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 00:31:44.987 [2024-12-05 14:03:27.277810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.277831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 00:31:44.987 [2024-12-05 14:03:27.277921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.277942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 00:31:44.987 [2024-12-05 14:03:27.278049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.278070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 
00:31:44.987 [2024-12-05 14:03:27.278149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.278169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 00:31:44.987 [2024-12-05 14:03:27.278262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.278283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 00:31:44.987 [2024-12-05 14:03:27.278475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.278497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 00:31:44.987 [2024-12-05 14:03:27.278589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.278611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 00:31:44.987 [2024-12-05 14:03:27.278847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.278868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 
00:31:44.987 [2024-12-05 14:03:27.279014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.279035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 00:31:44.987 [2024-12-05 14:03:27.279196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.279219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 00:31:44.987 [2024-12-05 14:03:27.279328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.279349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 00:31:44.987 [2024-12-05 14:03:27.279446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.279468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 00:31:44.987 [2024-12-05 14:03:27.279695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.279721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 
00:31:44.987 [2024-12-05 14:03:27.279827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.279849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 00:31:44.987 [2024-12-05 14:03:27.279955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.279976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 00:31:44.987 [2024-12-05 14:03:27.280071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.280093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 00:31:44.987 [2024-12-05 14:03:27.280250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.280272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 00:31:44.987 [2024-12-05 14:03:27.280490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.280513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 
00:31:44.987 [2024-12-05 14:03:27.280680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.280701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 00:31:44.987 [2024-12-05 14:03:27.280809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.280830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 00:31:44.987 [2024-12-05 14:03:27.280937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.280958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 00:31:44.987 [2024-12-05 14:03:27.281069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.281090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 00:31:44.987 [2024-12-05 14:03:27.281238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.281259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 
00:31:44.987 [2024-12-05 14:03:27.281418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.987 [2024-12-05 14:03:27.281440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.987 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.281629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.281650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.281809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.281831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.281986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.282007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.282124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.282145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 
00:31:44.988 [2024-12-05 14:03:27.282330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.282378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.282507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.282538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.282741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.282773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.282939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.282961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.283066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.283088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 
00:31:44.988 [2024-12-05 14:03:27.283248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.283270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.283428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.283450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.283688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.283720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.283981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.284012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.284202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.284234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 
00:31:44.988 [2024-12-05 14:03:27.284414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.284447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.284683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.284714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.284825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.284857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.285036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.285068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.285197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.285229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 
00:31:44.988 [2024-12-05 14:03:27.285349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.285390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.285635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.285667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.285790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.285811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.285908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.285929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.286075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.286096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 
00:31:44.988 [2024-12-05 14:03:27.286180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.286202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.286389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.286411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.286675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.286697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.286913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.286934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.287032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.287053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 
00:31:44.988 [2024-12-05 14:03:27.287152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.287174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.287364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.287391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.287541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.287563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.287715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.287753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.287866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.287897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 
00:31:44.988 [2024-12-05 14:03:27.288154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.288186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.288387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.288421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.288664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.288695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.288816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.288837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.988 qpair failed and we were unable to recover it. 00:31:44.988 [2024-12-05 14:03:27.289049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.988 [2024-12-05 14:03:27.289071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.989 qpair failed and we were unable to recover it. 
00:31:44.989 [2024-12-05 14:03:27.289230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.989 [2024-12-05 14:03:27.289252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.989 qpair failed and we were unable to recover it. 00:31:44.989 [2024-12-05 14:03:27.289475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.989 [2024-12-05 14:03:27.289508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.989 qpair failed and we were unable to recover it. 00:31:44.989 [2024-12-05 14:03:27.289702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.989 [2024-12-05 14:03:27.289734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.989 qpair failed and we were unable to recover it. 00:31:44.989 [2024-12-05 14:03:27.289858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.989 [2024-12-05 14:03:27.289889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.989 qpair failed and we were unable to recover it. 00:31:44.989 [2024-12-05 14:03:27.290145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.989 [2024-12-05 14:03:27.290166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.989 qpair failed and we were unable to recover it. 
00:31:44.992 [... identical connect() failed, errno = 111 / qpair failed sequence for tqpair=0xbe5be0 (addr=10.0.0.2, port=4420) repeats through 2024-12-05 14:03:27.311072; duplicate entries elided ...]
00:31:44.992 [2024-12-05 14:03:27.311246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.311277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 00:31:44.992 [2024-12-05 14:03:27.311520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.311554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 00:31:44.992 [2024-12-05 14:03:27.311729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.311763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 00:31:44.992 [2024-12-05 14:03:27.311960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.311993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 00:31:44.992 [2024-12-05 14:03:27.312180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.312211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 
00:31:44.992 [2024-12-05 14:03:27.312483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.312517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 00:31:44.992 [2024-12-05 14:03:27.312648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.312679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 00:31:44.992 [2024-12-05 14:03:27.312852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.312876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 00:31:44.992 [2024-12-05 14:03:27.313077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.313112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 00:31:44.992 [2024-12-05 14:03:27.313382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.313416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 
00:31:44.992 [2024-12-05 14:03:27.313601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.313634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 00:31:44.992 [2024-12-05 14:03:27.313882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.313908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 00:31:44.992 [2024-12-05 14:03:27.314025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.314047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 00:31:44.992 [2024-12-05 14:03:27.314204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.314228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 00:31:44.992 [2024-12-05 14:03:27.314391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.314415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 
00:31:44.992 [2024-12-05 14:03:27.314524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.314546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 00:31:44.992 [2024-12-05 14:03:27.314711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.314733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 00:31:44.992 [2024-12-05 14:03:27.314891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.314915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 00:31:44.992 [2024-12-05 14:03:27.315017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.315039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 00:31:44.992 [2024-12-05 14:03:27.315253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.315275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 
00:31:44.992 [2024-12-05 14:03:27.315436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.315458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 00:31:44.992 [2024-12-05 14:03:27.315643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.315676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 00:31:44.992 [2024-12-05 14:03:27.315792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.315825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 00:31:44.992 [2024-12-05 14:03:27.316012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.316045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 00:31:44.992 [2024-12-05 14:03:27.316292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.316324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 
00:31:44.992 [2024-12-05 14:03:27.316447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.316480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 00:31:44.992 [2024-12-05 14:03:27.316600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.316632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 00:31:44.992 [2024-12-05 14:03:27.316763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.316795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 00:31:44.992 [2024-12-05 14:03:27.316919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.316951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 00:31:44.992 [2024-12-05 14:03:27.317071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.317102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 
00:31:44.992 [2024-12-05 14:03:27.317212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.317234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 00:31:44.992 [2024-12-05 14:03:27.317406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.317428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 00:31:44.992 [2024-12-05 14:03:27.317539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.317560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 00:31:44.992 [2024-12-05 14:03:27.317660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.992 [2024-12-05 14:03:27.317680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.992 qpair failed and we were unable to recover it. 00:31:44.992 [2024-12-05 14:03:27.317848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.993 [2024-12-05 14:03:27.317870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.993 qpair failed and we were unable to recover it. 
00:31:44.993 [2024-12-05 14:03:27.318033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.993 [2024-12-05 14:03:27.318055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.993 qpair failed and we were unable to recover it. 00:31:44.993 [2024-12-05 14:03:27.318140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.993 [2024-12-05 14:03:27.318160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.993 qpair failed and we were unable to recover it. 00:31:44.993 [2024-12-05 14:03:27.318274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.993 [2024-12-05 14:03:27.318296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.993 qpair failed and we were unable to recover it. 00:31:44.993 [2024-12-05 14:03:27.318399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.993 [2024-12-05 14:03:27.318426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.993 qpair failed and we were unable to recover it. 00:31:44.993 [2024-12-05 14:03:27.318581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.993 [2024-12-05 14:03:27.318603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.993 qpair failed and we were unable to recover it. 
00:31:44.993 [2024-12-05 14:03:27.318762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.993 [2024-12-05 14:03:27.318785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.993 qpair failed and we were unable to recover it. 00:31:44.993 [2024-12-05 14:03:27.318867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.993 [2024-12-05 14:03:27.318888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.993 qpair failed and we were unable to recover it. 00:31:44.993 [2024-12-05 14:03:27.319046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.993 [2024-12-05 14:03:27.319069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.993 qpair failed and we were unable to recover it. 00:31:44.993 [2024-12-05 14:03:27.319173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.993 [2024-12-05 14:03:27.319195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.993 qpair failed and we were unable to recover it. 00:31:44.993 [2024-12-05 14:03:27.319296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.993 [2024-12-05 14:03:27.319318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.993 qpair failed and we were unable to recover it. 
00:31:44.993 [2024-12-05 14:03:27.319412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.993 [2024-12-05 14:03:27.319433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.993 qpair failed and we were unable to recover it. 00:31:44.993 [2024-12-05 14:03:27.319592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.993 [2024-12-05 14:03:27.319614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.993 qpair failed and we were unable to recover it. 00:31:44.993 [2024-12-05 14:03:27.319709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.993 [2024-12-05 14:03:27.319729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.993 qpair failed and we were unable to recover it. 00:31:44.993 [2024-12-05 14:03:27.319881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.993 [2024-12-05 14:03:27.319903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.993 qpair failed and we were unable to recover it. 00:31:44.993 [2024-12-05 14:03:27.319996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.993 [2024-12-05 14:03:27.320016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.993 qpair failed and we were unable to recover it. 
00:31:44.993 [2024-12-05 14:03:27.320187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.993 [2024-12-05 14:03:27.320209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.993 qpair failed and we were unable to recover it. 00:31:44.993 [2024-12-05 14:03:27.320318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.993 [2024-12-05 14:03:27.320342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.993 qpair failed and we were unable to recover it. 00:31:44.993 [2024-12-05 14:03:27.320440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.993 [2024-12-05 14:03:27.320461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.993 qpair failed and we were unable to recover it. 00:31:44.993 [2024-12-05 14:03:27.320701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.993 [2024-12-05 14:03:27.320722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.993 qpair failed and we were unable to recover it. 00:31:44.993 [2024-12-05 14:03:27.320918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.993 [2024-12-05 14:03:27.320940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.993 qpair failed and we were unable to recover it. 
00:31:44.993 [2024-12-05 14:03:27.321146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.993 [2024-12-05 14:03:27.321169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.993 qpair failed and we were unable to recover it. 00:31:44.993 [2024-12-05 14:03:27.321262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.993 [2024-12-05 14:03:27.321284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.993 qpair failed and we were unable to recover it. 00:31:44.993 [2024-12-05 14:03:27.321396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.993 [2024-12-05 14:03:27.321419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.993 qpair failed and we were unable to recover it. 00:31:44.993 [2024-12-05 14:03:27.321516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.993 [2024-12-05 14:03:27.321537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.993 qpair failed and we were unable to recover it. 00:31:44.993 [2024-12-05 14:03:27.321690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.993 [2024-12-05 14:03:27.321713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.993 qpair failed and we were unable to recover it. 
00:31:44.993 [2024-12-05 14:03:27.321875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.993 [2024-12-05 14:03:27.321898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.993 qpair failed and we were unable to recover it. 00:31:44.993 [2024-12-05 14:03:27.322068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.993 [2024-12-05 14:03:27.322090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.993 qpair failed and we were unable to recover it. 00:31:44.993 [2024-12-05 14:03:27.322238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.993 [2024-12-05 14:03:27.322259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.993 qpair failed and we were unable to recover it. 00:31:44.993 [2024-12-05 14:03:27.322410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.993 [2024-12-05 14:03:27.322448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.993 qpair failed and we were unable to recover it. 00:31:44.994 [2024-12-05 14:03:27.322707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.994 [2024-12-05 14:03:27.322740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.994 qpair failed and we were unable to recover it. 
00:31:44.994 [2024-12-05 14:03:27.323000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.994 [2024-12-05 14:03:27.323033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.994 qpair failed and we were unable to recover it. 00:31:44.994 [2024-12-05 14:03:27.323255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.994 [2024-12-05 14:03:27.323287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.994 qpair failed and we were unable to recover it. 00:31:44.994 [2024-12-05 14:03:27.323411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.994 [2024-12-05 14:03:27.323446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.994 qpair failed and we were unable to recover it. 00:31:44.994 [2024-12-05 14:03:27.323548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.994 [2024-12-05 14:03:27.323580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.994 qpair failed and we were unable to recover it. 00:31:44.994 [2024-12-05 14:03:27.323762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.994 [2024-12-05 14:03:27.323796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.994 qpair failed and we were unable to recover it. 
00:31:44.994 [2024-12-05 14:03:27.323982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.994 [2024-12-05 14:03:27.324016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.994 qpair failed and we were unable to recover it. 00:31:44.994 [2024-12-05 14:03:27.324212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.994 [2024-12-05 14:03:27.324246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.994 qpair failed and we were unable to recover it. 00:31:44.994 [2024-12-05 14:03:27.324444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.994 [2024-12-05 14:03:27.324480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.994 qpair failed and we were unable to recover it. 00:31:44.994 [2024-12-05 14:03:27.324673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.994 [2024-12-05 14:03:27.324706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.994 qpair failed and we were unable to recover it. 00:31:44.994 [2024-12-05 14:03:27.324900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.994 [2024-12-05 14:03:27.324933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.994 qpair failed and we were unable to recover it. 
00:31:44.994 [2024-12-05 14:03:27.325205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.994 [2024-12-05 14:03:27.325242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.994 qpair failed and we were unable to recover it. 
[... the same three-message sequence — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats roughly 114 more times, timestamps 14:03:27.325 through 14:03:27.345 (log clock 00:31:44.994–00:31:44.997) ...]
00:31:44.997 [2024-12-05 14:03:27.345419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.997 [2024-12-05 14:03:27.345452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.997 qpair failed and we were unable to recover it. 00:31:44.997 [2024-12-05 14:03:27.345571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.997 [2024-12-05 14:03:27.345603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.997 qpair failed and we were unable to recover it. 00:31:44.997 [2024-12-05 14:03:27.345790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.997 [2024-12-05 14:03:27.345823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.997 qpair failed and we were unable to recover it. 00:31:44.997 [2024-12-05 14:03:27.346015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.997 [2024-12-05 14:03:27.346046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.997 qpair failed and we were unable to recover it. 00:31:44.997 [2024-12-05 14:03:27.346230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.997 [2024-12-05 14:03:27.346263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.997 qpair failed and we were unable to recover it. 
00:31:44.998 [2024-12-05 14:03:27.346450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.346483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 00:31:44.998 [2024-12-05 14:03:27.346667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.346699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 00:31:44.998 [2024-12-05 14:03:27.346886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.346918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 00:31:44.998 [2024-12-05 14:03:27.347086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.347118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 00:31:44.998 [2024-12-05 14:03:27.347297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.347329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 
00:31:44.998 [2024-12-05 14:03:27.347513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.347546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 00:31:44.998 [2024-12-05 14:03:27.347759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.347791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 00:31:44.998 [2024-12-05 14:03:27.347975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.348007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 00:31:44.998 [2024-12-05 14:03:27.348175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.348207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 00:31:44.998 [2024-12-05 14:03:27.348388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.348422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 
00:31:44.998 [2024-12-05 14:03:27.348597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.348628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 00:31:44.998 [2024-12-05 14:03:27.348763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.348795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 00:31:44.998 [2024-12-05 14:03:27.348946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.348969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 00:31:44.998 [2024-12-05 14:03:27.349085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.349108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 00:31:44.998 [2024-12-05 14:03:27.349204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.349225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 
00:31:44.998 [2024-12-05 14:03:27.349329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.349350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 00:31:44.998 [2024-12-05 14:03:27.349447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.349469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 00:31:44.998 [2024-12-05 14:03:27.349690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.349711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 00:31:44.998 [2024-12-05 14:03:27.349929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.349951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 00:31:44.998 [2024-12-05 14:03:27.350050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.350072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 
00:31:44.998 [2024-12-05 14:03:27.350256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.350282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 00:31:44.998 [2024-12-05 14:03:27.350448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.350470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 00:31:44.998 [2024-12-05 14:03:27.350676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.350709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 00:31:44.998 [2024-12-05 14:03:27.350830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.350861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 00:31:44.998 [2024-12-05 14:03:27.351046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.351078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 
00:31:44.998 [2024-12-05 14:03:27.351267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.351289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 00:31:44.998 [2024-12-05 14:03:27.351403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.351425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 00:31:44.998 [2024-12-05 14:03:27.351612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.351634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 00:31:44.998 [2024-12-05 14:03:27.351738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.351760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 00:31:44.998 [2024-12-05 14:03:27.351929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.351951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 
00:31:44.998 [2024-12-05 14:03:27.352102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.998 [2024-12-05 14:03:27.352123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.998 qpair failed and we were unable to recover it. 00:31:44.998 [2024-12-05 14:03:27.352226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.352249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 00:31:44.999 [2024-12-05 14:03:27.352359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.352407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 00:31:44.999 [2024-12-05 14:03:27.352563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.352586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 00:31:44.999 [2024-12-05 14:03:27.352784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.352807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 
00:31:44.999 [2024-12-05 14:03:27.352909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.352932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 00:31:44.999 [2024-12-05 14:03:27.353026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.353048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 00:31:44.999 [2024-12-05 14:03:27.353156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.353178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 00:31:44.999 [2024-12-05 14:03:27.353375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.353398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 00:31:44.999 [2024-12-05 14:03:27.353552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.353574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 
00:31:44.999 [2024-12-05 14:03:27.353728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.353750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 00:31:44.999 [2024-12-05 14:03:27.353945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.353967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 00:31:44.999 [2024-12-05 14:03:27.354072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.354093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 00:31:44.999 [2024-12-05 14:03:27.354264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.354286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 00:31:44.999 [2024-12-05 14:03:27.354440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.354462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 
00:31:44.999 [2024-12-05 14:03:27.354691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.354712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 00:31:44.999 [2024-12-05 14:03:27.354880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.354919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 00:31:44.999 [2024-12-05 14:03:27.355102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.355133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 00:31:44.999 [2024-12-05 14:03:27.355420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.355455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 00:31:44.999 [2024-12-05 14:03:27.355689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.355721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 
00:31:44.999 [2024-12-05 14:03:27.355988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.356010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 00:31:44.999 [2024-12-05 14:03:27.356172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.356194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 00:31:44.999 [2024-12-05 14:03:27.356305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.356328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 00:31:44.999 [2024-12-05 14:03:27.356426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.356447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 00:31:44.999 [2024-12-05 14:03:27.356595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.356618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 
00:31:44.999 [2024-12-05 14:03:27.356716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.356736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 00:31:44.999 [2024-12-05 14:03:27.356833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.356855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 00:31:44.999 [2024-12-05 14:03:27.357003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.357024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 00:31:44.999 [2024-12-05 14:03:27.357207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.357229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 00:31:44.999 [2024-12-05 14:03:27.357394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.357416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 
00:31:44.999 [2024-12-05 14:03:27.357526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.357549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 00:31:44.999 [2024-12-05 14:03:27.357646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.357668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:44.999 qpair failed and we were unable to recover it. 00:31:44.999 [2024-12-05 14:03:27.357832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:44.999 [2024-12-05 14:03:27.357853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.000 qpair failed and we were unable to recover it. 00:31:45.000 [2024-12-05 14:03:27.357948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.000 [2024-12-05 14:03:27.357968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.000 qpair failed and we were unable to recover it. 00:31:45.000 [2024-12-05 14:03:27.358214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.000 [2024-12-05 14:03:27.358237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.000 qpair failed and we were unable to recover it. 
00:31:45.000 [2024-12-05 14:03:27.358453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.000 [2024-12-05 14:03:27.358476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.000 qpair failed and we were unable to recover it. 00:31:45.000 [2024-12-05 14:03:27.358563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.000 [2024-12-05 14:03:27.358585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.000 qpair failed and we were unable to recover it. 00:31:45.000 [2024-12-05 14:03:27.358745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.000 [2024-12-05 14:03:27.358768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.000 qpair failed and we were unable to recover it. 00:31:45.000 [2024-12-05 14:03:27.358868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.000 [2024-12-05 14:03:27.358891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.000 qpair failed and we were unable to recover it. 00:31:45.000 [2024-12-05 14:03:27.359037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.000 [2024-12-05 14:03:27.359060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.000 qpair failed and we were unable to recover it. 
00:31:45.000 [2024-12-05 14:03:27.359164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.000 [2024-12-05 14:03:27.359186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.000 qpair failed and we were unable to recover it. 00:31:45.000 [2024-12-05 14:03:27.359292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.000 [2024-12-05 14:03:27.359314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.000 qpair failed and we were unable to recover it. 00:31:45.000 [2024-12-05 14:03:27.359418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.000 [2024-12-05 14:03:27.359439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.000 qpair failed and we were unable to recover it. 00:31:45.000 [2024-12-05 14:03:27.359602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.000 [2024-12-05 14:03:27.359624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.000 qpair failed and we were unable to recover it. 00:31:45.000 [2024-12-05 14:03:27.359802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.000 [2024-12-05 14:03:27.359823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.000 qpair failed and we were unable to recover it. 
00:31:45.003 [2024-12-05 14:03:27.379834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.379865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.004 [2024-12-05 14:03:27.379985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.380027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.004 [2024-12-05 14:03:27.380244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.380266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.004 [2024-12-05 14:03:27.380417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.380440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.004 [2024-12-05 14:03:27.380635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.380667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 
00:31:45.004 [2024-12-05 14:03:27.380790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.380824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.004 [2024-12-05 14:03:27.381017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.381049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.004 [2024-12-05 14:03:27.381301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.381332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.004 [2024-12-05 14:03:27.381515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.381548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.004 [2024-12-05 14:03:27.381786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.381827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 
00:31:45.004 [2024-12-05 14:03:27.382047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.382068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.004 [2024-12-05 14:03:27.382167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.382189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.004 [2024-12-05 14:03:27.382283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.382309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.004 [2024-12-05 14:03:27.382404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.382426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.004 [2024-12-05 14:03:27.382640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.382662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 
00:31:45.004 [2024-12-05 14:03:27.382775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.382797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.004 [2024-12-05 14:03:27.382880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.382901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.004 [2024-12-05 14:03:27.383001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.383021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.004 [2024-12-05 14:03:27.383242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.383264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.004 [2024-12-05 14:03:27.383373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.383396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 
00:31:45.004 [2024-12-05 14:03:27.383502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.383524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.004 [2024-12-05 14:03:27.383630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.383652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.004 [2024-12-05 14:03:27.383738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.383758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.004 [2024-12-05 14:03:27.383856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.383880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.004 [2024-12-05 14:03:27.384054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.384077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 
00:31:45.004 [2024-12-05 14:03:27.384179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.384201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.004 [2024-12-05 14:03:27.384286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.384309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.004 [2024-12-05 14:03:27.384488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.384511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.004 [2024-12-05 14:03:27.384619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.384641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.004 [2024-12-05 14:03:27.384733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.384753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 
00:31:45.004 [2024-12-05 14:03:27.384838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.384860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.004 [2024-12-05 14:03:27.385044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.385067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.004 [2024-12-05 14:03:27.385166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.385189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.004 [2024-12-05 14:03:27.385283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.385304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.004 [2024-12-05 14:03:27.385388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.385410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 
00:31:45.004 [2024-12-05 14:03:27.385529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.385551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.004 [2024-12-05 14:03:27.385723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.385745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.004 [2024-12-05 14:03:27.385840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.004 [2024-12-05 14:03:27.385860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.004 qpair failed and we were unable to recover it. 00:31:45.005 [2024-12-05 14:03:27.385943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.385962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 00:31:45.005 [2024-12-05 14:03:27.386148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.386179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 
00:31:45.005 [2024-12-05 14:03:27.386337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.386360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 00:31:45.005 [2024-12-05 14:03:27.386541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.386563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 00:31:45.005 [2024-12-05 14:03:27.386723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.386744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 00:31:45.005 [2024-12-05 14:03:27.386843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.386865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 00:31:45.005 [2024-12-05 14:03:27.387017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.387039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 
00:31:45.005 [2024-12-05 14:03:27.387136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.387156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 00:31:45.005 [2024-12-05 14:03:27.387241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.387262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 00:31:45.005 [2024-12-05 14:03:27.387356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.387387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 00:31:45.005 [2024-12-05 14:03:27.387473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.387493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 00:31:45.005 [2024-12-05 14:03:27.387641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.387664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 
00:31:45.005 [2024-12-05 14:03:27.387755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.387776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 00:31:45.005 [2024-12-05 14:03:27.387929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.387950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 00:31:45.005 [2024-12-05 14:03:27.388043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.388064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 00:31:45.005 [2024-12-05 14:03:27.388237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.388259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 00:31:45.005 [2024-12-05 14:03:27.388408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.388431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 
00:31:45.005 [2024-12-05 14:03:27.388515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.388537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 00:31:45.005 [2024-12-05 14:03:27.388625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.388645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 00:31:45.005 [2024-12-05 14:03:27.388813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.388834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 00:31:45.005 [2024-12-05 14:03:27.388949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.388971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 00:31:45.005 [2024-12-05 14:03:27.389062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.389083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 
00:31:45.005 [2024-12-05 14:03:27.389237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.389259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 00:31:45.005 [2024-12-05 14:03:27.389362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.389391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 00:31:45.005 [2024-12-05 14:03:27.389490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.389513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 00:31:45.005 [2024-12-05 14:03:27.389671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.389693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 00:31:45.005 [2024-12-05 14:03:27.389791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.389812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 
00:31:45.005 [2024-12-05 14:03:27.389961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.389983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 00:31:45.005 [2024-12-05 14:03:27.390148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.390170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 00:31:45.005 [2024-12-05 14:03:27.390353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.390396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 00:31:45.005 [2024-12-05 14:03:27.390601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.390632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 00:31:45.005 [2024-12-05 14:03:27.390831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.390863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 
00:31:45.005 [2024-12-05 14:03:27.390994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.391026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 00:31:45.005 [2024-12-05 14:03:27.391137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.391169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 00:31:45.005 [2024-12-05 14:03:27.391293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.391315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 00:31:45.005 [2024-12-05 14:03:27.391533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.391557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 00:31:45.005 [2024-12-05 14:03:27.391664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.005 [2024-12-05 14:03:27.391685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.005 qpair failed and we were unable to recover it. 
00:31:45.005 [2024-12-05 14:03:27.391944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.005 [2024-12-05 14:03:27.391966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.005 qpair failed and we were unable to recover it.
[... the three records above repeat, with new timestamps, for tqpair=0xbe5be0 throughout 14:03:27.392049-14:03:27.413573; the distinct records in that window follow ...]
00:31:45.007 [2024-12-05 14:03:27.401614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf3b20 is same with the state(6) to be set
00:31:45.007 [2024-12-05 14:03:27.401969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.007 [2024-12-05 14:03:27.402041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:45.007 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure sequence occurs twice more for tqpair=0x7fdb5c000b90 (14:03:27.402251-14:03:27.402591) before the retries return to tqpair=0xbe5be0 ...]
00:31:45.009 [2024-12-05 14:03:27.413657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.413680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 00:31:45.009 [2024-12-05 14:03:27.413897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.413919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 00:31:45.009 [2024-12-05 14:03:27.414178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.414200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 00:31:45.009 [2024-12-05 14:03:27.414315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.414339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 00:31:45.009 [2024-12-05 14:03:27.414445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.414469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 
00:31:45.009 [2024-12-05 14:03:27.414625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.414648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 00:31:45.009 [2024-12-05 14:03:27.414887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.414919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 00:31:45.009 [2024-12-05 14:03:27.415035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.415066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 00:31:45.009 [2024-12-05 14:03:27.415250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.415272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 00:31:45.009 [2024-12-05 14:03:27.415382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.415405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 
00:31:45.009 [2024-12-05 14:03:27.415569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.415590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 00:31:45.009 [2024-12-05 14:03:27.415672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.415692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 00:31:45.009 [2024-12-05 14:03:27.415795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.415817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 00:31:45.009 [2024-12-05 14:03:27.416031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.416063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 00:31:45.009 [2024-12-05 14:03:27.416245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.416276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 
00:31:45.009 [2024-12-05 14:03:27.416500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.416534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 00:31:45.009 [2024-12-05 14:03:27.416637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.416669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 00:31:45.009 [2024-12-05 14:03:27.416836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.416868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 00:31:45.009 [2024-12-05 14:03:27.416975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.417008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 00:31:45.009 [2024-12-05 14:03:27.417125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.417157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 
00:31:45.009 [2024-12-05 14:03:27.417386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.417409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 00:31:45.009 [2024-12-05 14:03:27.417515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.417538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 00:31:45.009 [2024-12-05 14:03:27.417642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.417665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 00:31:45.009 [2024-12-05 14:03:27.417782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.417804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 00:31:45.009 [2024-12-05 14:03:27.417956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.417978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 
00:31:45.009 [2024-12-05 14:03:27.418100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.418122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 00:31:45.009 [2024-12-05 14:03:27.418231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.418253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 00:31:45.009 [2024-12-05 14:03:27.418344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.418366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 00:31:45.009 [2024-12-05 14:03:27.418559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.418581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 00:31:45.009 [2024-12-05 14:03:27.418682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.418704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 
00:31:45.009 [2024-12-05 14:03:27.418791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.418813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 00:31:45.009 [2024-12-05 14:03:27.418989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.419010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 00:31:45.009 [2024-12-05 14:03:27.419098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.419120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 00:31:45.009 [2024-12-05 14:03:27.419211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.009 [2024-12-05 14:03:27.419233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.009 qpair failed and we were unable to recover it. 00:31:45.010 [2024-12-05 14:03:27.419316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.419338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 
00:31:45.010 [2024-12-05 14:03:27.419451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.419474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 00:31:45.010 [2024-12-05 14:03:27.419575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.419596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 00:31:45.010 [2024-12-05 14:03:27.419764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.419786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 00:31:45.010 [2024-12-05 14:03:27.419934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.419955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 00:31:45.010 [2024-12-05 14:03:27.420075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.420100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 
00:31:45.010 [2024-12-05 14:03:27.420322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.420354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 00:31:45.010 [2024-12-05 14:03:27.420501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.420533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 00:31:45.010 [2024-12-05 14:03:27.420636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.420669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 00:31:45.010 [2024-12-05 14:03:27.420906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.420937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 00:31:45.010 [2024-12-05 14:03:27.421121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.421154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 
00:31:45.010 [2024-12-05 14:03:27.421364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.421433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 00:31:45.010 [2024-12-05 14:03:27.421603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.421635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 00:31:45.010 [2024-12-05 14:03:27.421808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.421841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 00:31:45.010 [2024-12-05 14:03:27.422028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.422060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 00:31:45.010 [2024-12-05 14:03:27.422233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.422265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 
00:31:45.010 [2024-12-05 14:03:27.422450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.422483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 00:31:45.010 [2024-12-05 14:03:27.422746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.422779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 00:31:45.010 [2024-12-05 14:03:27.422895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.422928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 00:31:45.010 [2024-12-05 14:03:27.423102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.423124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 00:31:45.010 [2024-12-05 14:03:27.423319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.423352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 
00:31:45.010 [2024-12-05 14:03:27.423482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.423514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 00:31:45.010 [2024-12-05 14:03:27.423719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.423751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 00:31:45.010 [2024-12-05 14:03:27.423882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.423914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 00:31:45.010 [2024-12-05 14:03:27.424096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.424119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 00:31:45.010 [2024-12-05 14:03:27.424278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.424299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 
00:31:45.010 [2024-12-05 14:03:27.424403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.424424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 00:31:45.010 [2024-12-05 14:03:27.424571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.424593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 00:31:45.010 [2024-12-05 14:03:27.424752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.424774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 00:31:45.010 [2024-12-05 14:03:27.424955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.424977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 00:31:45.010 [2024-12-05 14:03:27.425141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.425162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 
00:31:45.010 [2024-12-05 14:03:27.425268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.425290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 00:31:45.010 [2024-12-05 14:03:27.425506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.425528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 00:31:45.010 [2024-12-05 14:03:27.425701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.010 [2024-12-05 14:03:27.425724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.010 qpair failed and we were unable to recover it. 00:31:45.011 [2024-12-05 14:03:27.425884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.011 [2024-12-05 14:03:27.425905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.011 qpair failed and we were unable to recover it. 00:31:45.011 [2024-12-05 14:03:27.426009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.011 [2024-12-05 14:03:27.426031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.011 qpair failed and we were unable to recover it. 
00:31:45.011 [2024-12-05 14:03:27.426128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.011 [2024-12-05 14:03:27.426151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.011 qpair failed and we were unable to recover it. 00:31:45.011 [2024-12-05 14:03:27.426328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.011 [2024-12-05 14:03:27.426350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.011 qpair failed and we were unable to recover it. 00:31:45.011 [2024-12-05 14:03:27.426529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.011 [2024-12-05 14:03:27.426551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.011 qpair failed and we were unable to recover it. 00:31:45.011 [2024-12-05 14:03:27.426658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.011 [2024-12-05 14:03:27.426680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.011 qpair failed and we were unable to recover it. 00:31:45.011 [2024-12-05 14:03:27.426767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.011 [2024-12-05 14:03:27.426788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.011 qpair failed and we were unable to recover it. 
00:31:45.011 [2024-12-05 14:03:27.426886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.011 [2024-12-05 14:03:27.426907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.011 qpair failed and we were unable to recover it. 00:31:45.011 [2024-12-05 14:03:27.427002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.011 [2024-12-05 14:03:27.427026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.011 qpair failed and we were unable to recover it. 00:31:45.011 [2024-12-05 14:03:27.427124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.011 [2024-12-05 14:03:27.427148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.011 qpair failed and we were unable to recover it. 00:31:45.011 [2024-12-05 14:03:27.427241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.011 [2024-12-05 14:03:27.427262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.011 qpair failed and we were unable to recover it. 00:31:45.011 [2024-12-05 14:03:27.427358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.011 [2024-12-05 14:03:27.427389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.011 qpair failed and we were unable to recover it. 
00:31:45.014 [2024-12-05 14:03:27.446898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.446930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 00:31:45.014 [2024-12-05 14:03:27.447116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.447138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 00:31:45.014 [2024-12-05 14:03:27.447285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.447306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 00:31:45.014 [2024-12-05 14:03:27.447475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.447497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 00:31:45.014 [2024-12-05 14:03:27.447725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.447798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 
00:31:45.014 [2024-12-05 14:03:27.448053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.448125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 00:31:45.014 [2024-12-05 14:03:27.448313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.448350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 00:31:45.014 [2024-12-05 14:03:27.448533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.448559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 00:31:45.014 [2024-12-05 14:03:27.448726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.448749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 00:31:45.014 [2024-12-05 14:03:27.448901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.448924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 
00:31:45.014 [2024-12-05 14:03:27.449082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.449105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 00:31:45.014 [2024-12-05 14:03:27.449191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.449214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 00:31:45.014 [2024-12-05 14:03:27.449377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.449403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 00:31:45.014 [2024-12-05 14:03:27.449577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.449609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 00:31:45.014 [2024-12-05 14:03:27.449839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.449871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 
00:31:45.014 [2024-12-05 14:03:27.449992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.450024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 00:31:45.014 [2024-12-05 14:03:27.450200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.450223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 00:31:45.014 [2024-12-05 14:03:27.450299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.450320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 00:31:45.014 [2024-12-05 14:03:27.450475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.450498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 00:31:45.014 [2024-12-05 14:03:27.450663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.450707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 
00:31:45.014 [2024-12-05 14:03:27.450942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.450974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 00:31:45.014 [2024-12-05 14:03:27.451107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.451139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 00:31:45.014 [2024-12-05 14:03:27.451382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.451404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 00:31:45.014 [2024-12-05 14:03:27.451509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.451530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 00:31:45.014 [2024-12-05 14:03:27.451708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.451731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 
00:31:45.014 [2024-12-05 14:03:27.451838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.451861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 00:31:45.014 [2024-12-05 14:03:27.452104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.452136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 00:31:45.014 [2024-12-05 14:03:27.452259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.452293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 00:31:45.014 [2024-12-05 14:03:27.452436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.452470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 00:31:45.014 [2024-12-05 14:03:27.452596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.452628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 
00:31:45.014 [2024-12-05 14:03:27.452807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.452840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 00:31:45.014 [2024-12-05 14:03:27.453104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.453141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 00:31:45.014 [2024-12-05 14:03:27.453272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.453304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 00:31:45.014 [2024-12-05 14:03:27.453435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.453471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 00:31:45.014 [2024-12-05 14:03:27.453643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.453665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 
00:31:45.014 [2024-12-05 14:03:27.453897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.453929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 00:31:45.014 [2024-12-05 14:03:27.454117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.014 [2024-12-05 14:03:27.454149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.014 qpair failed and we were unable to recover it. 00:31:45.015 [2024-12-05 14:03:27.454341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.454383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 00:31:45.015 [2024-12-05 14:03:27.454505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.454528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 00:31:45.015 [2024-12-05 14:03:27.454767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.454791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 
00:31:45.015 [2024-12-05 14:03:27.454958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.454980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 00:31:45.015 [2024-12-05 14:03:27.455133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.455156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 00:31:45.015 [2024-12-05 14:03:27.455251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.455275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 00:31:45.015 [2024-12-05 14:03:27.455374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.455395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 00:31:45.015 [2024-12-05 14:03:27.455637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.455659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 
00:31:45.015 [2024-12-05 14:03:27.455829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.455851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 00:31:45.015 [2024-12-05 14:03:27.455944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.455966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 00:31:45.015 [2024-12-05 14:03:27.456113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.456136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 00:31:45.015 [2024-12-05 14:03:27.456240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.456263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 00:31:45.015 [2024-12-05 14:03:27.456511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.456534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 
00:31:45.015 [2024-12-05 14:03:27.456631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.456651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 00:31:45.015 [2024-12-05 14:03:27.456770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.456792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 00:31:45.015 [2024-12-05 14:03:27.456891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.456913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 00:31:45.015 [2024-12-05 14:03:27.457016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.457038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 00:31:45.015 [2024-12-05 14:03:27.457117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.457138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 
00:31:45.015 [2024-12-05 14:03:27.457250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.457273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 00:31:45.015 [2024-12-05 14:03:27.457521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.457543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 00:31:45.015 [2024-12-05 14:03:27.457700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.457723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 00:31:45.015 [2024-12-05 14:03:27.457816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.457841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 00:31:45.015 [2024-12-05 14:03:27.457955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.457978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 
00:31:45.015 [2024-12-05 14:03:27.458147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.458169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 00:31:45.015 [2024-12-05 14:03:27.458407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.458442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 00:31:45.015 [2024-12-05 14:03:27.458634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.458666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 00:31:45.015 [2024-12-05 14:03:27.458780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.458813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 00:31:45.015 [2024-12-05 14:03:27.458983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.459015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 
00:31:45.015 [2024-12-05 14:03:27.459189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.459211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 00:31:45.015 [2024-12-05 14:03:27.459392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.459425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 00:31:45.015 [2024-12-05 14:03:27.459650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.459682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 00:31:45.015 [2024-12-05 14:03:27.459867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.459900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 00:31:45.015 [2024-12-05 14:03:27.460038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.460071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.015 qpair failed and we were unable to recover it. 
00:31:45.015 [2024-12-05 14:03:27.460331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.015 [2024-12-05 14:03:27.460364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.016 qpair failed and we were unable to recover it. 00:31:45.016 [2024-12-05 14:03:27.460613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.016 [2024-12-05 14:03:27.460647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.016 qpair failed and we were unable to recover it. 00:31:45.016 [2024-12-05 14:03:27.460847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.016 [2024-12-05 14:03:27.460879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.016 qpair failed and we were unable to recover it. 00:31:45.016 [2024-12-05 14:03:27.461069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.016 [2024-12-05 14:03:27.461101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.016 qpair failed and we were unable to recover it. 00:31:45.016 [2024-12-05 14:03:27.461300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.016 [2024-12-05 14:03:27.461333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.016 qpair failed and we were unable to recover it. 
00:31:45.016 [2024-12-05 14:03:27.461528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.016 [2024-12-05 14:03:27.461561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.016 qpair failed and we were unable to recover it. 00:31:45.016 [2024-12-05 14:03:27.461742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.016 [2024-12-05 14:03:27.461776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.016 qpair failed and we were unable to recover it. 00:31:45.016 [2024-12-05 14:03:27.461894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.016 [2024-12-05 14:03:27.461927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.016 qpair failed and we were unable to recover it. 00:31:45.016 [2024-12-05 14:03:27.462135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.016 [2024-12-05 14:03:27.462167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.016 qpair failed and we were unable to recover it. 00:31:45.016 [2024-12-05 14:03:27.462284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.016 [2024-12-05 14:03:27.462317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.016 qpair failed and we were unable to recover it. 
00:31:45.016 [2024-12-05 14:03:27.462428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.016 [2024-12-05 14:03:27.462466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.016 qpair failed and we were unable to recover it. 00:31:45.016 [2024-12-05 14:03:27.462631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.016 [2024-12-05 14:03:27.462654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.016 qpair failed and we were unable to recover it. 00:31:45.016 [2024-12-05 14:03:27.462819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.016 [2024-12-05 14:03:27.462843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.016 qpair failed and we were unable to recover it. 00:31:45.016 [2024-12-05 14:03:27.463015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.016 [2024-12-05 14:03:27.463039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.016 qpair failed and we were unable to recover it. 00:31:45.016 [2024-12-05 14:03:27.463205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.016 [2024-12-05 14:03:27.463227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.016 qpair failed and we were unable to recover it. 
00:31:45.018 [2024-12-05 14:03:27.483530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.483552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.483645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.483668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.483759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.483782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.483887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.483911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.483991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.484012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 
00:31:45.019 [2024-12-05 14:03:27.484162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.484184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.484409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.484432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.484601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.484624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.484705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.484725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.484891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.484913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 
00:31:45.019 [2024-12-05 14:03:27.485014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.485037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.485202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.485223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.485398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.485432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.485602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.485672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.485944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.485980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 
00:31:45.019 [2024-12-05 14:03:27.486116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.486150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.486275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.486307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.486499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.486534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.486716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.486748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.486932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.486959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 
00:31:45.019 [2024-12-05 14:03:27.487065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.487088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.487184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.487208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.487297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.487320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.487473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.487496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.487585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.487607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 
00:31:45.019 [2024-12-05 14:03:27.487716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.487738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.487835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.487857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.487962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.487983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.488137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.488159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.488268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.488289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 
00:31:45.019 [2024-12-05 14:03:27.488462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.488485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.488585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.488606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.488796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.488829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.488968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.488999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.489109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.489140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 
00:31:45.019 [2024-12-05 14:03:27.489332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.489364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.489555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.489589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.489693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.489726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.019 [2024-12-05 14:03:27.489968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.019 [2024-12-05 14:03:27.490000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.019 qpair failed and we were unable to recover it. 00:31:45.020 [2024-12-05 14:03:27.490200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.020 [2024-12-05 14:03:27.490233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.020 qpair failed and we were unable to recover it. 
00:31:45.020 [2024-12-05 14:03:27.490355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.020 [2024-12-05 14:03:27.490427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.020 qpair failed and we were unable to recover it. 00:31:45.020 [2024-12-05 14:03:27.490559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.020 [2024-12-05 14:03:27.490581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.020 qpair failed and we were unable to recover it. 00:31:45.020 [2024-12-05 14:03:27.490670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.020 [2024-12-05 14:03:27.490691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.020 qpair failed and we were unable to recover it. 00:31:45.020 [2024-12-05 14:03:27.490863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.020 [2024-12-05 14:03:27.490885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.020 qpair failed and we were unable to recover it. 00:31:45.020 [2024-12-05 14:03:27.491123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.020 [2024-12-05 14:03:27.491146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.020 qpair failed and we were unable to recover it. 
00:31:45.020 [2024-12-05 14:03:27.491266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.020 [2024-12-05 14:03:27.491289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.020 qpair failed and we were unable to recover it. 00:31:45.020 [2024-12-05 14:03:27.491446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.020 [2024-12-05 14:03:27.491469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.020 qpair failed and we were unable to recover it. 00:31:45.020 [2024-12-05 14:03:27.491564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.020 [2024-12-05 14:03:27.491587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.020 qpair failed and we were unable to recover it. 00:31:45.020 [2024-12-05 14:03:27.491683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.020 [2024-12-05 14:03:27.491705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.020 qpair failed and we were unable to recover it. 00:31:45.020 [2024-12-05 14:03:27.491789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.020 [2024-12-05 14:03:27.491810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.020 qpair failed and we were unable to recover it. 
00:31:45.020 [2024-12-05 14:03:27.491972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.020 [2024-12-05 14:03:27.491994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.020 qpair failed and we were unable to recover it. 00:31:45.020 [2024-12-05 14:03:27.492175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.020 [2024-12-05 14:03:27.492197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.020 qpair failed and we were unable to recover it. 00:31:45.020 [2024-12-05 14:03:27.492355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.020 [2024-12-05 14:03:27.492383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.020 qpair failed and we were unable to recover it. 00:31:45.020 [2024-12-05 14:03:27.492482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.020 [2024-12-05 14:03:27.492505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.020 qpair failed and we were unable to recover it. 00:31:45.020 [2024-12-05 14:03:27.492688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.020 [2024-12-05 14:03:27.492711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.020 qpair failed and we were unable to recover it. 
00:31:45.020 [2024-12-05 14:03:27.492887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.020 [2024-12-05 14:03:27.492920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.020 qpair failed and we were unable to recover it. 00:31:45.020 [2024-12-05 14:03:27.493119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.020 [2024-12-05 14:03:27.493152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.020 qpair failed and we were unable to recover it. 00:31:45.020 [2024-12-05 14:03:27.493328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.020 [2024-12-05 14:03:27.493361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.020 qpair failed and we were unable to recover it. 00:31:45.020 [2024-12-05 14:03:27.493482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.020 [2024-12-05 14:03:27.493504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.020 qpair failed and we were unable to recover it. 00:31:45.020 [2024-12-05 14:03:27.493671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.020 [2024-12-05 14:03:27.493694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.020 qpair failed and we were unable to recover it. 
00:31:45.020 [2024-12-05 14:03:27.493789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.020 [2024-12-05 14:03:27.493810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.020 qpair failed and we were unable to recover it. 00:31:45.020 [2024-12-05 14:03:27.493970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.020 [2024-12-05 14:03:27.493992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.020 qpair failed and we were unable to recover it. 00:31:45.020 [2024-12-05 14:03:27.494107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.020 [2024-12-05 14:03:27.494129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.020 qpair failed and we were unable to recover it. 00:31:45.020 [2024-12-05 14:03:27.494296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.020 [2024-12-05 14:03:27.494317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.020 qpair failed and we were unable to recover it. 00:31:45.020 [2024-12-05 14:03:27.494534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.020 [2024-12-05 14:03:27.494557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.020 qpair failed and we were unable to recover it. 
00:31:45.020 [2024-12-05 14:03:27.494815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.020 [2024-12-05 14:03:27.494837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.020 qpair failed and we were unable to recover it. 00:31:45.020 [2024-12-05 14:03:27.495025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.020 [2024-12-05 14:03:27.495047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.020 qpair failed and we were unable to recover it. 00:31:45.020 [2024-12-05 14:03:27.495153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.020 [2024-12-05 14:03:27.495178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.020 qpair failed and we were unable to recover it. 00:31:45.020 [2024-12-05 14:03:27.495351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.021 [2024-12-05 14:03:27.495382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.021 qpair failed and we were unable to recover it. 00:31:45.021 [2024-12-05 14:03:27.495555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.021 [2024-12-05 14:03:27.495578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.021 qpair failed and we were unable to recover it. 
00:31:45.021 [2024-12-05 14:03:27.495818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.021 [2024-12-05 14:03:27.495851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.021 qpair failed and we were unable to recover it. 00:31:45.021 [2024-12-05 14:03:27.496065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.021 [2024-12-05 14:03:27.496097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.021 qpair failed and we were unable to recover it. 00:31:45.021 [2024-12-05 14:03:27.496300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.021 [2024-12-05 14:03:27.496331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.021 qpair failed and we were unable to recover it. 00:31:45.021 [2024-12-05 14:03:27.496489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.021 [2024-12-05 14:03:27.496512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.021 qpair failed and we were unable to recover it. 00:31:45.021 [2024-12-05 14:03:27.496674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.021 [2024-12-05 14:03:27.496696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.021 qpair failed and we were unable to recover it. 
00:31:45.021 [2024-12-05 14:03:27.496873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.021 [2024-12-05 14:03:27.496896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.021 qpair failed and we were unable to recover it. 00:31:45.021 [2024-12-05 14:03:27.497006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.021 [2024-12-05 14:03:27.497028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.021 qpair failed and we were unable to recover it. 00:31:45.021 [2024-12-05 14:03:27.497125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.021 [2024-12-05 14:03:27.497148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.021 qpair failed and we were unable to recover it. 00:31:45.021 [2024-12-05 14:03:27.497404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.021 [2024-12-05 14:03:27.497427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.021 qpair failed and we were unable to recover it. 00:31:45.021 [2024-12-05 14:03:27.497671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.021 [2024-12-05 14:03:27.497694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.021 qpair failed and we were unable to recover it. 
[... the same three-line sequence — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats for every subsequent connection attempt, from [2024-12-05 14:03:27.497784] through [2024-12-05 14:03:27.517454] (log timestamps 00:31:45.021-00:31:45.024) ...]
00:31:45.024 [2024-12-05 14:03:27.517559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.024 [2024-12-05 14:03:27.517581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.024 qpair failed and we were unable to recover it. 00:31:45.024 [2024-12-05 14:03:27.517741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.024 [2024-12-05 14:03:27.517764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.024 qpair failed and we were unable to recover it. 00:31:45.024 [2024-12-05 14:03:27.517855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.024 [2024-12-05 14:03:27.517877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.024 qpair failed and we were unable to recover it. 00:31:45.024 [2024-12-05 14:03:27.518117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.024 [2024-12-05 14:03:27.518139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.024 qpair failed and we were unable to recover it. 00:31:45.024 [2024-12-05 14:03:27.518251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.024 [2024-12-05 14:03:27.518274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.024 qpair failed and we were unable to recover it. 
00:31:45.348 [2024-12-05 14:03:27.518372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.348 [2024-12-05 14:03:27.518396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.348 qpair failed and we were unable to recover it. 00:31:45.348 [2024-12-05 14:03:27.518487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.348 [2024-12-05 14:03:27.518509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.348 qpair failed and we were unable to recover it. 00:31:45.348 [2024-12-05 14:03:27.518647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.348 [2024-12-05 14:03:27.518669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.348 qpair failed and we were unable to recover it. 00:31:45.348 [2024-12-05 14:03:27.518827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.348 [2024-12-05 14:03:27.518849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.348 qpair failed and we were unable to recover it. 00:31:45.348 [2024-12-05 14:03:27.518946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.348 [2024-12-05 14:03:27.518968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.348 qpair failed and we were unable to recover it. 
00:31:45.348 [2024-12-05 14:03:27.519123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.348 [2024-12-05 14:03:27.519144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.348 qpair failed and we were unable to recover it. 00:31:45.348 [2024-12-05 14:03:27.519309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.348 [2024-12-05 14:03:27.519332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 00:31:45.349 [2024-12-05 14:03:27.519452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.519475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 00:31:45.349 [2024-12-05 14:03:27.519574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.519597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 00:31:45.349 [2024-12-05 14:03:27.519760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.519781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 
00:31:45.349 [2024-12-05 14:03:27.519930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.519953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 00:31:45.349 [2024-12-05 14:03:27.520118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.520141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 00:31:45.349 [2024-12-05 14:03:27.520322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.520343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 00:31:45.349 [2024-12-05 14:03:27.520440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.520463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 00:31:45.349 [2024-12-05 14:03:27.520619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.520642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 
00:31:45.349 [2024-12-05 14:03:27.520791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.520812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 00:31:45.349 [2024-12-05 14:03:27.520894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.520917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 00:31:45.349 [2024-12-05 14:03:27.521139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.521162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 00:31:45.349 [2024-12-05 14:03:27.521330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.521352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 00:31:45.349 [2024-12-05 14:03:27.521526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.521550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 
00:31:45.349 [2024-12-05 14:03:27.521653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.521676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 00:31:45.349 [2024-12-05 14:03:27.521863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.521886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 00:31:45.349 [2024-12-05 14:03:27.522007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.522029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 00:31:45.349 [2024-12-05 14:03:27.522200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.522223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 00:31:45.349 [2024-12-05 14:03:27.522388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.522410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 
00:31:45.349 [2024-12-05 14:03:27.522674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.522696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 00:31:45.349 [2024-12-05 14:03:27.522883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.522904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 00:31:45.349 [2024-12-05 14:03:27.523206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.523228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 00:31:45.349 [2024-12-05 14:03:27.523337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.523359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 00:31:45.349 [2024-12-05 14:03:27.523544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.523567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 
00:31:45.349 [2024-12-05 14:03:27.523718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.523740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 00:31:45.349 [2024-12-05 14:03:27.523821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.523842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 00:31:45.349 [2024-12-05 14:03:27.523954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.523975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 00:31:45.349 [2024-12-05 14:03:27.524083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.524105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 00:31:45.349 [2024-12-05 14:03:27.524267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.524289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 
00:31:45.349 [2024-12-05 14:03:27.524510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.524534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 00:31:45.349 [2024-12-05 14:03:27.524622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.524644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 00:31:45.349 [2024-12-05 14:03:27.524732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.524754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 00:31:45.349 [2024-12-05 14:03:27.524995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.525017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 00:31:45.349 [2024-12-05 14:03:27.525245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.349 [2024-12-05 14:03:27.525268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.349 qpair failed and we were unable to recover it. 
00:31:45.349 [2024-12-05 14:03:27.525457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.350 [2024-12-05 14:03:27.525481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.350 qpair failed and we were unable to recover it. 00:31:45.350 [2024-12-05 14:03:27.525700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.350 [2024-12-05 14:03:27.525724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.350 qpair failed and we were unable to recover it. 00:31:45.350 [2024-12-05 14:03:27.525968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.350 [2024-12-05 14:03:27.525989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.350 qpair failed and we were unable to recover it. 00:31:45.350 [2024-12-05 14:03:27.526084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.350 [2024-12-05 14:03:27.526105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.350 qpair failed and we were unable to recover it. 00:31:45.350 [2024-12-05 14:03:27.526269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.350 [2024-12-05 14:03:27.526292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.350 qpair failed and we were unable to recover it. 
00:31:45.350 [2024-12-05 14:03:27.526453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.350 [2024-12-05 14:03:27.526477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.350 qpair failed and we were unable to recover it. 00:31:45.350 [2024-12-05 14:03:27.526584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.350 [2024-12-05 14:03:27.526605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.350 qpair failed and we were unable to recover it. 00:31:45.350 [2024-12-05 14:03:27.526772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.350 [2024-12-05 14:03:27.526796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.350 qpair failed and we were unable to recover it. 00:31:45.350 [2024-12-05 14:03:27.526893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.350 [2024-12-05 14:03:27.526915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.350 qpair failed and we were unable to recover it. 00:31:45.350 [2024-12-05 14:03:27.527068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.350 [2024-12-05 14:03:27.527094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.350 qpair failed and we were unable to recover it. 
00:31:45.350 [2024-12-05 14:03:27.527190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.350 [2024-12-05 14:03:27.527212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.350 qpair failed and we were unable to recover it. 00:31:45.350 [2024-12-05 14:03:27.527361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.350 [2024-12-05 14:03:27.527391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.350 qpair failed and we were unable to recover it. 00:31:45.350 [2024-12-05 14:03:27.527495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.350 [2024-12-05 14:03:27.527518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.350 qpair failed and we were unable to recover it. 00:31:45.350 [2024-12-05 14:03:27.527609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.350 [2024-12-05 14:03:27.527633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.350 qpair failed and we were unable to recover it. 00:31:45.350 [2024-12-05 14:03:27.527783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.350 [2024-12-05 14:03:27.527806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.350 qpair failed and we were unable to recover it. 
00:31:45.350 [2024-12-05 14:03:27.528069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.350 [2024-12-05 14:03:27.528091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.350 qpair failed and we were unable to recover it. 00:31:45.350 [2024-12-05 14:03:27.528195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.350 [2024-12-05 14:03:27.528217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.350 qpair failed and we were unable to recover it. 00:31:45.350 [2024-12-05 14:03:27.528325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.350 [2024-12-05 14:03:27.528347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.350 qpair failed and we were unable to recover it. 00:31:45.350 [2024-12-05 14:03:27.528509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.350 [2024-12-05 14:03:27.528532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.350 qpair failed and we were unable to recover it. 00:31:45.350 [2024-12-05 14:03:27.528630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.350 [2024-12-05 14:03:27.528652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.350 qpair failed and we were unable to recover it. 
00:31:45.350 [2024-12-05 14:03:27.528739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.350 [2024-12-05 14:03:27.528763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.350 qpair failed and we were unable to recover it. 00:31:45.350 [2024-12-05 14:03:27.528928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.350 [2024-12-05 14:03:27.528953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.350 qpair failed and we were unable to recover it. 00:31:45.350 [2024-12-05 14:03:27.529058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.350 [2024-12-05 14:03:27.529081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.350 qpair failed and we were unable to recover it. 00:31:45.350 [2024-12-05 14:03:27.529237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.350 [2024-12-05 14:03:27.529259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.350 qpair failed and we were unable to recover it. 00:31:45.350 [2024-12-05 14:03:27.529349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.350 [2024-12-05 14:03:27.529392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.350 qpair failed and we were unable to recover it. 
00:31:45.350 [2024-12-05 14:03:27.529487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.350 [2024-12-05 14:03:27.529510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.350 qpair failed and we were unable to recover it. 00:31:45.350 [2024-12-05 14:03:27.529603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.350 [2024-12-05 14:03:27.529625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.350 qpair failed and we were unable to recover it. 00:31:45.350 [2024-12-05 14:03:27.529729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.350 [2024-12-05 14:03:27.529753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.350 qpair failed and we were unable to recover it. 00:31:45.350 [2024-12-05 14:03:27.529900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.350 [2024-12-05 14:03:27.529922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.350 qpair failed and we were unable to recover it. 00:31:45.350 [2024-12-05 14:03:27.530077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.350 [2024-12-05 14:03:27.530098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.350 qpair failed and we were unable to recover it. 
00:31:45.350 [2024-12-05 14:03:27.530257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.350 [2024-12-05 14:03:27.530280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.350 qpair failed and we were unable to recover it.
00:31:45.350 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." triplet repeats for every reconnect attempt from 14:03:27.530 through 14:03:27.550, always with errno = 111 (ECONNREFUSED), tqpair=0xbe5be0, target 10.0.0.2 port 4420 ...]
00:31:45.354 [2024-12-05 14:03:27.551101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.354 [2024-12-05 14:03:27.551126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.354 qpair failed and we were unable to recover it. 00:31:45.354 [2024-12-05 14:03:27.551343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.354 [2024-12-05 14:03:27.551365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.354 qpair failed and we were unable to recover it. 00:31:45.354 [2024-12-05 14:03:27.551591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.354 [2024-12-05 14:03:27.551612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.354 qpair failed and we were unable to recover it. 00:31:45.354 [2024-12-05 14:03:27.551769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.354 [2024-12-05 14:03:27.551791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.354 qpair failed and we were unable to recover it. 00:31:45.354 [2024-12-05 14:03:27.551899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.354 [2024-12-05 14:03:27.551920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.354 qpair failed and we were unable to recover it. 
00:31:45.354 [2024-12-05 14:03:27.552022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.354 [2024-12-05 14:03:27.552042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.354 qpair failed and we were unable to recover it. 00:31:45.354 [2024-12-05 14:03:27.552229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.354 [2024-12-05 14:03:27.552250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.354 qpair failed and we were unable to recover it. 00:31:45.354 [2024-12-05 14:03:27.552521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.354 [2024-12-05 14:03:27.552544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.354 qpair failed and we were unable to recover it. 00:31:45.354 [2024-12-05 14:03:27.552736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.354 [2024-12-05 14:03:27.552758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.354 qpair failed and we were unable to recover it. 00:31:45.354 [2024-12-05 14:03:27.553013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.354 [2024-12-05 14:03:27.553045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.354 qpair failed and we were unable to recover it. 
00:31:45.354 [2024-12-05 14:03:27.553166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.354 [2024-12-05 14:03:27.553198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.354 qpair failed and we were unable to recover it. 00:31:45.354 [2024-12-05 14:03:27.553403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.354 [2024-12-05 14:03:27.553436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.354 qpair failed and we were unable to recover it. 00:31:45.354 [2024-12-05 14:03:27.553584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.354 [2024-12-05 14:03:27.553607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.354 qpair failed and we were unable to recover it. 00:31:45.354 [2024-12-05 14:03:27.553721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.354 [2024-12-05 14:03:27.553743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.354 qpair failed and we were unable to recover it. 00:31:45.354 [2024-12-05 14:03:27.553913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.354 [2024-12-05 14:03:27.553937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.354 qpair failed and we were unable to recover it. 
00:31:45.354 [2024-12-05 14:03:27.554175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.354 [2024-12-05 14:03:27.554207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.354 qpair failed and we were unable to recover it. 00:31:45.354 [2024-12-05 14:03:27.554312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.354 [2024-12-05 14:03:27.554345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.354 qpair failed and we were unable to recover it. 00:31:45.354 [2024-12-05 14:03:27.554550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.354 [2024-12-05 14:03:27.554582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.354 qpair failed and we were unable to recover it. 00:31:45.354 [2024-12-05 14:03:27.554786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.354 [2024-12-05 14:03:27.554807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.354 qpair failed and we were unable to recover it. 00:31:45.354 [2024-12-05 14:03:27.554991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.354 [2024-12-05 14:03:27.555014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.354 qpair failed and we were unable to recover it. 
00:31:45.354 [2024-12-05 14:03:27.555175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.354 [2024-12-05 14:03:27.555218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.354 qpair failed and we were unable to recover it. 00:31:45.354 [2024-12-05 14:03:27.555362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.354 [2024-12-05 14:03:27.555406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.354 qpair failed and we were unable to recover it. 00:31:45.354 [2024-12-05 14:03:27.555671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.354 [2024-12-05 14:03:27.555703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.354 qpair failed and we were unable to recover it. 00:31:45.354 [2024-12-05 14:03:27.555866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.354 [2024-12-05 14:03:27.555888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.354 qpair failed and we were unable to recover it. 00:31:45.354 [2024-12-05 14:03:27.556066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.354 [2024-12-05 14:03:27.556099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.354 qpair failed and we were unable to recover it. 
00:31:45.354 [2024-12-05 14:03:27.556273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.354 [2024-12-05 14:03:27.556304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.354 qpair failed and we were unable to recover it. 00:31:45.354 [2024-12-05 14:03:27.556545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.354 [2024-12-05 14:03:27.556578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.354 qpair failed and we were unable to recover it. 00:31:45.354 [2024-12-05 14:03:27.556767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.354 [2024-12-05 14:03:27.556789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.354 qpair failed and we were unable to recover it. 00:31:45.354 [2024-12-05 14:03:27.556954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.354 [2024-12-05 14:03:27.556975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.354 qpair failed and we were unable to recover it. 00:31:45.355 [2024-12-05 14:03:27.557125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.355 [2024-12-05 14:03:27.557146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.355 qpair failed and we were unable to recover it. 
00:31:45.355 [2024-12-05 14:03:27.557306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.355 [2024-12-05 14:03:27.557328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.355 qpair failed and we were unable to recover it. 00:31:45.355 [2024-12-05 14:03:27.557598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.355 [2024-12-05 14:03:27.557621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.355 qpair failed and we were unable to recover it. 00:31:45.355 [2024-12-05 14:03:27.557744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.355 [2024-12-05 14:03:27.557776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.355 qpair failed and we were unable to recover it. 00:31:45.355 [2024-12-05 14:03:27.557876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.355 [2024-12-05 14:03:27.557908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.355 qpair failed and we were unable to recover it. 00:31:45.355 [2024-12-05 14:03:27.558090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.355 [2024-12-05 14:03:27.558123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.355 qpair failed and we were unable to recover it. 
00:31:45.355 [2024-12-05 14:03:27.558638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.355 [2024-12-05 14:03:27.558709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.355 qpair failed and we were unable to recover it.
00:31:45.356 [2024-12-05 14:03:27.570028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.356 [2024-12-05 14:03:27.570060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.356 qpair failed and we were unable to recover it. 00:31:45.356 [2024-12-05 14:03:27.570179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.356 [2024-12-05 14:03:27.570210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.356 qpair failed and we were unable to recover it. 00:31:45.356 [2024-12-05 14:03:27.570478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.356 [2024-12-05 14:03:27.570501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.356 qpair failed and we were unable to recover it. 00:31:45.356 [2024-12-05 14:03:27.570663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.356 [2024-12-05 14:03:27.570695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.356 qpair failed and we were unable to recover it. 00:31:45.356 [2024-12-05 14:03:27.570826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.356 [2024-12-05 14:03:27.570859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.356 qpair failed and we were unable to recover it. 
00:31:45.356 [2024-12-05 14:03:27.571052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.356 [2024-12-05 14:03:27.571084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.356 qpair failed and we were unable to recover it. 00:31:45.356 [2024-12-05 14:03:27.571280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.571311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.571513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.571547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.571656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.571689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.571872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.571893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 
00:31:45.357 [2024-12-05 14:03:27.571992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.572014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.572230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.572252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.572474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.572496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.572712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.572733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.572846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.572871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 
00:31:45.357 [2024-12-05 14:03:27.572954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.572976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.573211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.573234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.573342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.573364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.573483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.573505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.573654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.573676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 
00:31:45.357 [2024-12-05 14:03:27.573765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.573786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.573896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.573917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.574135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.574176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.574459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.574492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.574679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.574711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 
00:31:45.357 [2024-12-05 14:03:27.574895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.574916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.575071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.575092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.575330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.575360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.575542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.575564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.575782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.575813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 
00:31:45.357 [2024-12-05 14:03:27.576015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.576046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.576167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.576199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.576380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.576402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.576597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.576630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.576798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.576829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 
00:31:45.357 [2024-12-05 14:03:27.576999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.577031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.577267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.577299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.577558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.577580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.577824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.577845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.578087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.578109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 
00:31:45.357 [2024-12-05 14:03:27.578268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.578290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.578509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.578548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.578741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.578773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.578907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.357 [2024-12-05 14:03:27.578938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.357 qpair failed and we were unable to recover it. 00:31:45.357 [2024-12-05 14:03:27.579127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.579158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 
00:31:45.358 [2024-12-05 14:03:27.579343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.579364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 00:31:45.358 [2024-12-05 14:03:27.579535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.579556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 00:31:45.358 [2024-12-05 14:03:27.579791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.579824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 00:31:45.358 [2024-12-05 14:03:27.580003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.580036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 00:31:45.358 [2024-12-05 14:03:27.580174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.580205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 
00:31:45.358 [2024-12-05 14:03:27.580319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.580351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 00:31:45.358 [2024-12-05 14:03:27.580535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.580557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 00:31:45.358 [2024-12-05 14:03:27.580634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.580655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 00:31:45.358 [2024-12-05 14:03:27.580815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.580837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 00:31:45.358 [2024-12-05 14:03:27.580987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.581009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 
00:31:45.358 [2024-12-05 14:03:27.581106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.581127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 00:31:45.358 [2024-12-05 14:03:27.581296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.581329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 00:31:45.358 [2024-12-05 14:03:27.581579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.581612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 00:31:45.358 [2024-12-05 14:03:27.581720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.581752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 00:31:45.358 [2024-12-05 14:03:27.581929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.581950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 
00:31:45.358 [2024-12-05 14:03:27.582051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.582072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 00:31:45.358 [2024-12-05 14:03:27.582162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.582184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 00:31:45.358 [2024-12-05 14:03:27.582422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.582444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 00:31:45.358 [2024-12-05 14:03:27.582551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.582573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 00:31:45.358 [2024-12-05 14:03:27.582670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.582692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 
00:31:45.358 [2024-12-05 14:03:27.582859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.582880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 00:31:45.358 [2024-12-05 14:03:27.583034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.583055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 00:31:45.358 [2024-12-05 14:03:27.583226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.583247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 00:31:45.358 [2024-12-05 14:03:27.583422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.583444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 00:31:45.358 [2024-12-05 14:03:27.583642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.583663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 
00:31:45.358 [2024-12-05 14:03:27.583755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.583776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 00:31:45.358 [2024-12-05 14:03:27.583991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.584012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 00:31:45.358 [2024-12-05 14:03:27.584109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.584130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 00:31:45.358 [2024-12-05 14:03:27.584297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.584318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 00:31:45.358 [2024-12-05 14:03:27.584481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.358 [2024-12-05 14:03:27.584503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.358 qpair failed and we were unable to recover it. 
00:31:45.358 [2024-12-05 14:03:27.584673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.358 [2024-12-05 14:03:27.584705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.358 qpair failed and we were unable to recover it.
00:31:45.358 [2024-12-05 14:03:27.584826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.358 [2024-12-05 14:03:27.584858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.358 qpair failed and we were unable to recover it.
00:31:45.358 [2024-12-05 14:03:27.584980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.358 [2024-12-05 14:03:27.585013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.358 qpair failed and we were unable to recover it.
00:31:45.358 [2024-12-05 14:03:27.585190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.585221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.585416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.585448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.585583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.585615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.585728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.585760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.585866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.585891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.585986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.586007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.586102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.586123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.586226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.586247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.586436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.586459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.586611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.586633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.586792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.586813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.586976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.586998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.587077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.587097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.587205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.587226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.587383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.587405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.587510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.587532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.587612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.587632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.587730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.587752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.587846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.587868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.587950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.587970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.588073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.588095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.588274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.588306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.588416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.588449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.588571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.588603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.588772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.588804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.588983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.589004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.589166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.589188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.589281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.589302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.589395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.589416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.589516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.589538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.589682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.589703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.589946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.589972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.590129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.590170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.590342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.590383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.590555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.590577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.590769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.590801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.590991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.591022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.591192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.591224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.359 [2024-12-05 14:03:27.591325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.359 [2024-12-05 14:03:27.591357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.359 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.591602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.591634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.591820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.591852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.592041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.592063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.592250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.592281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.592532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.592565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.592801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.592833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.593093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.593166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.593390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.593428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.593607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.593641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.593754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.593777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.593942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.593964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.594149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.594181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.594384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.594417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.594589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.594620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.594881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.594902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.595069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.595091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.595189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.595210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.595362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.595403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.595496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.595517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.595757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.595778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.595969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.595990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.596094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.596115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.596199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.596220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.596318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.596339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.596425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.596447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.596637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.596658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.596844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.596865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.596967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.596988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.597210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.597232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.597339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.597374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.597548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.597570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.597788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.597810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.597959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.597980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.598225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.598253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.598402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.598425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.598542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.598564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.598646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.598667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.360 [2024-12-05 14:03:27.598823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.360 [2024-12-05 14:03:27.598866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.360 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.599036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.599067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.599356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.599398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.599506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.599529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.599687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.599709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.599897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.599919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.600080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.600102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.600207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.600229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.600464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.600486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.600668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.600689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.600859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.600881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.601039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.601071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.601201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.601233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.601358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.601410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.601597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.601635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.601743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.601765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.601927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.601949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.602097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.602119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.602373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.602396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.602501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.602523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.602681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.602703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.602922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.602944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.603099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.603122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.603209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.603235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.603405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.603428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.603591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.603613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.603791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.603823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.604004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.604036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.604204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.604235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.604427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.604449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.604612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.604634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.604804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.604836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.604966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.604997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.605240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.605272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.605515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.605547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.605729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.361 [2024-12-05 14:03:27.605760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.361 qpair failed and we were unable to recover it.
00:31:45.361 [2024-12-05 14:03:27.606010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.361 [2024-12-05 14:03:27.606042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.361 qpair failed and we were unable to recover it. 00:31:45.361 [2024-12-05 14:03:27.606316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.361 [2024-12-05 14:03:27.606348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.361 qpair failed and we were unable to recover it. 00:31:45.361 [2024-12-05 14:03:27.606530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.361 [2024-12-05 14:03:27.606562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.361 qpair failed and we were unable to recover it. 00:31:45.361 [2024-12-05 14:03:27.606692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.361 [2024-12-05 14:03:27.606724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 00:31:45.362 [2024-12-05 14:03:27.606969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.607001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 
00:31:45.362 [2024-12-05 14:03:27.607249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.607281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 00:31:45.362 [2024-12-05 14:03:27.607410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.607443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 00:31:45.362 [2024-12-05 14:03:27.607614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.607636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 00:31:45.362 [2024-12-05 14:03:27.607785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.607806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 00:31:45.362 [2024-12-05 14:03:27.607906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.607928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 
00:31:45.362 [2024-12-05 14:03:27.608094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.608115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 00:31:45.362 [2024-12-05 14:03:27.608284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.608316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 00:31:45.362 [2024-12-05 14:03:27.608430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.608462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 00:31:45.362 [2024-12-05 14:03:27.608731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.608762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 00:31:45.362 [2024-12-05 14:03:27.608877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.608899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 
00:31:45.362 [2024-12-05 14:03:27.608985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.609006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 00:31:45.362 [2024-12-05 14:03:27.609116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.609138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 00:31:45.362 [2024-12-05 14:03:27.609232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.609253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 00:31:45.362 [2024-12-05 14:03:27.609345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.609366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 00:31:45.362 [2024-12-05 14:03:27.609528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.609550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 
00:31:45.362 [2024-12-05 14:03:27.609652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.609673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 00:31:45.362 [2024-12-05 14:03:27.609793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.609814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 00:31:45.362 [2024-12-05 14:03:27.609975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.609997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 00:31:45.362 [2024-12-05 14:03:27.610212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.610233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 00:31:45.362 [2024-12-05 14:03:27.610311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.610331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 
00:31:45.362 [2024-12-05 14:03:27.610501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.610524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 00:31:45.362 [2024-12-05 14:03:27.610610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.610630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 00:31:45.362 [2024-12-05 14:03:27.610787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.610808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 00:31:45.362 [2024-12-05 14:03:27.610968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.610993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 00:31:45.362 [2024-12-05 14:03:27.611212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.611233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 
00:31:45.362 [2024-12-05 14:03:27.611324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.611346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 00:31:45.362 [2024-12-05 14:03:27.611536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.611559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 00:31:45.362 [2024-12-05 14:03:27.611655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.611676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 00:31:45.362 [2024-12-05 14:03:27.611839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.611871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 00:31:45.362 [2024-12-05 14:03:27.611991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.612022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 
00:31:45.362 [2024-12-05 14:03:27.612220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.612252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 00:31:45.362 [2024-12-05 14:03:27.612401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.612435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 00:31:45.362 [2024-12-05 14:03:27.612624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.612645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 00:31:45.362 [2024-12-05 14:03:27.612739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.612760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 00:31:45.362 [2024-12-05 14:03:27.612929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.612951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 
00:31:45.362 [2024-12-05 14:03:27.613117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.362 [2024-12-05 14:03:27.613139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.362 qpair failed and we were unable to recover it. 00:31:45.362 [2024-12-05 14:03:27.613289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.613326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 00:31:45.363 [2024-12-05 14:03:27.613517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.613550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 00:31:45.363 [2024-12-05 14:03:27.613687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.613719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 00:31:45.363 [2024-12-05 14:03:27.613969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.614001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 
00:31:45.363 [2024-12-05 14:03:27.614134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.614166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 00:31:45.363 [2024-12-05 14:03:27.614379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.614413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 00:31:45.363 [2024-12-05 14:03:27.614524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.614555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 00:31:45.363 [2024-12-05 14:03:27.614726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.614765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 00:31:45.363 [2024-12-05 14:03:27.614870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.614892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 
00:31:45.363 [2024-12-05 14:03:27.615081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.615103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 00:31:45.363 [2024-12-05 14:03:27.615202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.615224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 00:31:45.363 [2024-12-05 14:03:27.615399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.615422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 00:31:45.363 [2024-12-05 14:03:27.615589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.615610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 00:31:45.363 [2024-12-05 14:03:27.615709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.615731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 
00:31:45.363 [2024-12-05 14:03:27.615821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.615841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 00:31:45.363 [2024-12-05 14:03:27.616017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.616038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 00:31:45.363 [2024-12-05 14:03:27.616132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.616153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 00:31:45.363 [2024-12-05 14:03:27.616341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.616395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 00:31:45.363 [2024-12-05 14:03:27.616495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.616527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 
00:31:45.363 [2024-12-05 14:03:27.616791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.616822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 00:31:45.363 [2024-12-05 14:03:27.616942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.616964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 00:31:45.363 [2024-12-05 14:03:27.617046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.617067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 00:31:45.363 [2024-12-05 14:03:27.617218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.617240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 00:31:45.363 [2024-12-05 14:03:27.617401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.617424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 
00:31:45.363 [2024-12-05 14:03:27.617643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.617664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 00:31:45.363 [2024-12-05 14:03:27.617740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.617761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 00:31:45.363 [2024-12-05 14:03:27.617946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.617968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 00:31:45.363 [2024-12-05 14:03:27.618119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.618141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 00:31:45.363 [2024-12-05 14:03:27.618234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.618256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 
00:31:45.363 [2024-12-05 14:03:27.618356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.618383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 00:31:45.363 [2024-12-05 14:03:27.618478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.618500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 00:31:45.363 [2024-12-05 14:03:27.618580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.618601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 00:31:45.363 [2024-12-05 14:03:27.618758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.618780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 00:31:45.363 [2024-12-05 14:03:27.618869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.618889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.363 qpair failed and we were unable to recover it. 
00:31:45.363 [2024-12-05 14:03:27.619050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.363 [2024-12-05 14:03:27.619071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.364 qpair failed and we were unable to recover it. 00:31:45.364 [2024-12-05 14:03:27.619230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.364 [2024-12-05 14:03:27.619252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.364 qpair failed and we were unable to recover it. 00:31:45.364 [2024-12-05 14:03:27.619403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.364 [2024-12-05 14:03:27.619425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.364 qpair failed and we were unable to recover it. 00:31:45.364 [2024-12-05 14:03:27.619508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.364 [2024-12-05 14:03:27.619529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.364 qpair failed and we were unable to recover it. 00:31:45.364 [2024-12-05 14:03:27.619676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.364 [2024-12-05 14:03:27.619697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.364 qpair failed and we were unable to recover it. 
00:31:45.366 [2024-12-05 14:03:27.641113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.366 [2024-12-05 14:03:27.641144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.366 qpair failed and we were unable to recover it. 00:31:45.367 [2024-12-05 14:03:27.641323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.641354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 00:31:45.367 [2024-12-05 14:03:27.641638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.641660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 00:31:45.367 [2024-12-05 14:03:27.641755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.641779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 00:31:45.367 [2024-12-05 14:03:27.641928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.641950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 
00:31:45.367 [2024-12-05 14:03:27.642198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.642220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 00:31:45.367 [2024-12-05 14:03:27.642389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.642413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 00:31:45.367 [2024-12-05 14:03:27.642526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.642548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 00:31:45.367 [2024-12-05 14:03:27.642706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.642727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 00:31:45.367 [2024-12-05 14:03:27.642825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.642847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 
00:31:45.367 [2024-12-05 14:03:27.643010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.643031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 00:31:45.367 [2024-12-05 14:03:27.643271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.643292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 00:31:45.367 [2024-12-05 14:03:27.643405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.643429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 00:31:45.367 [2024-12-05 14:03:27.643540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.643561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 00:31:45.367 [2024-12-05 14:03:27.643654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.643676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 
00:31:45.367 [2024-12-05 14:03:27.643843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.643868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 00:31:45.367 [2024-12-05 14:03:27.644109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.644131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 00:31:45.367 [2024-12-05 14:03:27.644297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.644319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 00:31:45.367 [2024-12-05 14:03:27.644431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.644454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 00:31:45.367 [2024-12-05 14:03:27.644640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.644661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 
00:31:45.367 [2024-12-05 14:03:27.644894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.644916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 00:31:45.367 [2024-12-05 14:03:27.645009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.645031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 00:31:45.367 [2024-12-05 14:03:27.645178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.645199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 00:31:45.367 [2024-12-05 14:03:27.645393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.645426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 00:31:45.367 [2024-12-05 14:03:27.645531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.645563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 
00:31:45.367 [2024-12-05 14:03:27.645803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.645834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 00:31:45.367 [2024-12-05 14:03:27.645936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.645958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 00:31:45.367 [2024-12-05 14:03:27.646054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.646077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 00:31:45.367 [2024-12-05 14:03:27.646260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.646281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 00:31:45.367 [2024-12-05 14:03:27.646388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.646411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 
00:31:45.367 [2024-12-05 14:03:27.646504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.646526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 00:31:45.367 [2024-12-05 14:03:27.646688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.646709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 00:31:45.367 [2024-12-05 14:03:27.646879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.646901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 00:31:45.367 [2024-12-05 14:03:27.646996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.647018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 00:31:45.367 [2024-12-05 14:03:27.647184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.367 [2024-12-05 14:03:27.647206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.367 qpair failed and we were unable to recover it. 
00:31:45.367 [2024-12-05 14:03:27.647357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.647385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.647470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.647491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.647708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.647730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.647912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.647933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.648035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.648057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 
00:31:45.368 [2024-12-05 14:03:27.648203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.648224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.648448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.648471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.648620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.648645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.648854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.648885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.649098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.649130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 
00:31:45.368 [2024-12-05 14:03:27.649312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.649343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.649465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.649499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.649617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.649649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.649773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.649805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.650014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.650046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 
00:31:45.368 [2024-12-05 14:03:27.650238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.650270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.650380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.650413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.650534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.650566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.650746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.650768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.650967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.650999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 
00:31:45.368 [2024-12-05 14:03:27.651261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.651293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.651420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.651454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.651638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.651670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.651774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.651805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.652049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.652081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 
00:31:45.368 [2024-12-05 14:03:27.652283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.652314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.652506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.652540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.652711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.652743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.652864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.652887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.653052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.653074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 
00:31:45.368 [2024-12-05 14:03:27.653244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.653275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.653447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.653480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.653718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.653750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.653933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.653955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.654148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.654180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 
00:31:45.368 [2024-12-05 14:03:27.654308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.654340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.654500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.654572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.654773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.368 [2024-12-05 14:03:27.654809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:45.368 qpair failed and we were unable to recover it. 00:31:45.368 [2024-12-05 14:03:27.655054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.369 [2024-12-05 14:03:27.655086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:45.369 qpair failed and we were unable to recover it. 00:31:45.369 [2024-12-05 14:03:27.655205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.369 [2024-12-05 14:03:27.655229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.369 qpair failed and we were unable to recover it. 
00:31:45.369 [2024-12-05 14:03:27.655387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.655410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.655651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.655672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.655854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.655875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.656092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.656113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.656259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.656282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.656450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.656495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.656733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.656764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.656953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.656985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.657171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.657210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.657495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.657528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.657716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.657737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.657994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.658016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.658233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.658254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.658486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.658508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.658604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.658625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.658782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.658804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.659076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.659108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.659296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.659327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.659594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.659627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.659796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.659828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.659952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.659983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.660261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.660283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.660456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.660478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.660569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.660589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.660739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.660761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.660928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.660949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.661154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.661186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.661414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.661448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.661574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.661606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.661712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.661733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.661890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.661912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.662130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.662152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.662305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.662327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.662491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.662514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.662597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.662617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.662780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.662805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.369 [2024-12-05 14:03:27.663022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.369 [2024-12-05 14:03:27.663044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.369 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.663124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.663145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.663391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.663413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.663569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.663591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.663707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.663729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.664002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.664023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.664117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.664138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.664298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.664320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.664508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.664531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.664694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.664716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.664882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.664903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.665165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.665187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.665268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.665288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.665450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.665473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.665740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.665762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.665845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.665866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.665967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.665988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.666148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.666169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.666338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.666378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.666501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.666533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.666779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.666812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.667022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.667044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.667303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.667325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.667421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.667442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.667532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.667554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.667751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.667773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.667875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.667897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.668066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.668088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.668202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.668223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.668333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.668355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.668455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.668477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.668643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.668664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.668778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.668800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.668952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.668974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.669078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.669099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.669250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.669272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.669386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.669409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.669583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.669605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.669684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.370 [2024-12-05 14:03:27.669704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.370 qpair failed and we were unable to recover it.
00:31:45.370 [2024-12-05 14:03:27.669850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.669871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.669970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.669996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.670177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.670198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.670381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.670404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.670565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.670587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.670780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.670813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.670917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.670948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.671077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.671108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.671290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.671322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.671513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.671546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.671793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.671814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.671973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.671994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.672172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.672203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.672399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.672432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.672600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.672631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.672814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.672836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.673017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.673048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.673231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.673262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.673390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.673423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.673546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.673579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.673763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.673796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.673927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.673958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.674084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.674116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.674282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.674314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.674499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.674535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.674647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.674670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.674758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.674780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.675000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.675024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.675268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.675297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.675450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.675473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.675642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.675666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.675774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.675795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.675891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.675911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.676113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.676137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.676259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.676281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.676453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.676476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.676580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.676604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.676755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.676776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.371 [2024-12-05 14:03:27.676880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.371 [2024-12-05 14:03:27.676902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.371 qpair failed and we were unable to recover it.
00:31:45.372 [2024-12-05 14:03:27.677015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.372 [2024-12-05 14:03:27.677037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.372 qpair failed and we were unable to recover it.
00:31:45.372 [2024-12-05 14:03:27.677152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.677173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 00:31:45.372 [2024-12-05 14:03:27.677256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.677277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 00:31:45.372 [2024-12-05 14:03:27.677505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.677572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 00:31:45.372 [2024-12-05 14:03:27.677828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.677900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 00:31:45.372 [2024-12-05 14:03:27.678053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.678091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 
00:31:45.372 [2024-12-05 14:03:27.678263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.678288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 00:31:45.372 [2024-12-05 14:03:27.678452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.678475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 00:31:45.372 [2024-12-05 14:03:27.678624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.678646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 00:31:45.372 [2024-12-05 14:03:27.678737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.678759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 00:31:45.372 [2024-12-05 14:03:27.678844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.678866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 
00:31:45.372 [2024-12-05 14:03:27.679029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.679051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 00:31:45.372 [2024-12-05 14:03:27.679138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.679160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 00:31:45.372 [2024-12-05 14:03:27.679317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.679340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 00:31:45.372 [2024-12-05 14:03:27.679530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.679553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 00:31:45.372 [2024-12-05 14:03:27.679803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.679825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 
00:31:45.372 [2024-12-05 14:03:27.679923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.679946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 00:31:45.372 [2024-12-05 14:03:27.680053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.680076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 00:31:45.372 [2024-12-05 14:03:27.680185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.680208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 00:31:45.372 [2024-12-05 14:03:27.680390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.680414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 00:31:45.372 [2024-12-05 14:03:27.680569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.680591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 
00:31:45.372 [2024-12-05 14:03:27.680709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.680732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 00:31:45.372 [2024-12-05 14:03:27.680816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.680837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 00:31:45.372 [2024-12-05 14:03:27.681053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.681077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 00:31:45.372 [2024-12-05 14:03:27.681231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.681253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 00:31:45.372 [2024-12-05 14:03:27.681418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.681441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 
00:31:45.372 [2024-12-05 14:03:27.681530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.681550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 00:31:45.372 [2024-12-05 14:03:27.681738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.681762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 00:31:45.372 [2024-12-05 14:03:27.681964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.681985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 00:31:45.372 [2024-12-05 14:03:27.682132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.682155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 00:31:45.372 [2024-12-05 14:03:27.682239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.682264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 
00:31:45.372 [2024-12-05 14:03:27.682430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.682452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 00:31:45.372 [2024-12-05 14:03:27.682653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.682674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 00:31:45.372 [2024-12-05 14:03:27.682830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.372 [2024-12-05 14:03:27.682851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.372 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.683032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.683053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.683131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.683151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 
00:31:45.373 [2024-12-05 14:03:27.683243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.683263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.683380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.683402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.683497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.683517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.683681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.683702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.683782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.683802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 
00:31:45.373 [2024-12-05 14:03:27.683968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.683989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.684139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.684160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.684258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.684280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.684526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.684548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.684700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.684721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 
00:31:45.373 [2024-12-05 14:03:27.684807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.684826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.684986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.685007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.685166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.685188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.685344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.685366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.685559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.685581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 
00:31:45.373 [2024-12-05 14:03:27.685825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.685846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.685948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.685970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.686140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.686161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.686327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.686349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.686501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.686524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 
00:31:45.373 [2024-12-05 14:03:27.686611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.686631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.686785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.686811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.686919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.686941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.687018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.687039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.687202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.687224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 
00:31:45.373 [2024-12-05 14:03:27.687413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.687435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.687584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.687605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.687772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.687793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.687879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.687899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.688054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.688075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 
00:31:45.373 [2024-12-05 14:03:27.688178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.688200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.688380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.688403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.688585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.688607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.688708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.688730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.688974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.688996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 
00:31:45.373 [2024-12-05 14:03:27.689160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.373 [2024-12-05 14:03:27.689182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.373 qpair failed and we were unable to recover it. 00:31:45.373 [2024-12-05 14:03:27.689273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.374 [2024-12-05 14:03:27.689293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.374 qpair failed and we were unable to recover it. 00:31:45.374 [2024-12-05 14:03:27.689388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.374 [2024-12-05 14:03:27.689410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.374 qpair failed and we were unable to recover it. 00:31:45.374 [2024-12-05 14:03:27.689495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.374 [2024-12-05 14:03:27.689515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.374 qpair failed and we were unable to recover it. 00:31:45.374 [2024-12-05 14:03:27.689676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.374 [2024-12-05 14:03:27.689697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.374 qpair failed and we were unable to recover it. 
00:31:45.374 [2024-12-05 14:03:27.689929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.374 [2024-12-05 14:03:27.689950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.374 qpair failed and we were unable to recover it. 00:31:45.374 [2024-12-05 14:03:27.690028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.374 [2024-12-05 14:03:27.690049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.374 qpair failed and we were unable to recover it. 00:31:45.374 [2024-12-05 14:03:27.690207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.374 [2024-12-05 14:03:27.690228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.374 qpair failed and we were unable to recover it. 00:31:45.374 [2024-12-05 14:03:27.690393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.374 [2024-12-05 14:03:27.690415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.374 qpair failed and we were unable to recover it. 00:31:45.374 [2024-12-05 14:03:27.690574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.374 [2024-12-05 14:03:27.690595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.374 qpair failed and we were unable to recover it. 
00:31:45.377 [2024-12-05 14:03:27.709972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.710005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 00:31:45.377 [2024-12-05 14:03:27.710250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.710271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 00:31:45.377 [2024-12-05 14:03:27.710431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.710471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 00:31:45.377 [2024-12-05 14:03:27.710563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.710585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 00:31:45.377 [2024-12-05 14:03:27.710802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.710824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 
00:31:45.377 [2024-12-05 14:03:27.710981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.711002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 00:31:45.377 [2024-12-05 14:03:27.711189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.711221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 00:31:45.377 [2024-12-05 14:03:27.711404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.711437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 00:31:45.377 [2024-12-05 14:03:27.711631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.711665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 00:31:45.377 [2024-12-05 14:03:27.711798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.711820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 
00:31:45.377 [2024-12-05 14:03:27.711930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.711952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 00:31:45.377 [2024-12-05 14:03:27.712065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.712087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 00:31:45.377 [2024-12-05 14:03:27.712270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.712293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 00:31:45.377 [2024-12-05 14:03:27.712443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.712466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 00:31:45.377 [2024-12-05 14:03:27.712697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.712719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 
00:31:45.377 [2024-12-05 14:03:27.712817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.712839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 00:31:45.377 [2024-12-05 14:03:27.712993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.713016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 00:31:45.377 [2024-12-05 14:03:27.713094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.713116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 00:31:45.377 [2024-12-05 14:03:27.713307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.713329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 00:31:45.377 [2024-12-05 14:03:27.713539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.713561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 
00:31:45.377 [2024-12-05 14:03:27.713804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.713825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 00:31:45.377 [2024-12-05 14:03:27.713985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.714006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 00:31:45.377 [2024-12-05 14:03:27.714102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.714124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 00:31:45.377 [2024-12-05 14:03:27.714224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.714245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 00:31:45.377 [2024-12-05 14:03:27.714410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.714432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 
00:31:45.377 [2024-12-05 14:03:27.714512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.714534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 00:31:45.377 [2024-12-05 14:03:27.714722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.714754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 00:31:45.377 [2024-12-05 14:03:27.714992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.715024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 00:31:45.377 [2024-12-05 14:03:27.715156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.715189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 00:31:45.377 [2024-12-05 14:03:27.715315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.715347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 
00:31:45.377 [2024-12-05 14:03:27.715484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.715516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 00:31:45.377 [2024-12-05 14:03:27.715654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.377 [2024-12-05 14:03:27.715686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.377 qpair failed and we were unable to recover it. 00:31:45.377 [2024-12-05 14:03:27.715860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.715902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 00:31:45.378 [2024-12-05 14:03:27.716053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.716075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 00:31:45.378 [2024-12-05 14:03:27.716222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.716243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 
00:31:45.378 [2024-12-05 14:03:27.716338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.716361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 00:31:45.378 [2024-12-05 14:03:27.716485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.716507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 00:31:45.378 [2024-12-05 14:03:27.716657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.716678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 00:31:45.378 [2024-12-05 14:03:27.716759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.716780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 00:31:45.378 [2024-12-05 14:03:27.716891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.716913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 
00:31:45.378 [2024-12-05 14:03:27.717003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.717024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 00:31:45.378 [2024-12-05 14:03:27.717113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.717135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 00:31:45.378 [2024-12-05 14:03:27.717224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.717245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 00:31:45.378 [2024-12-05 14:03:27.717349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.717381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 00:31:45.378 [2024-12-05 14:03:27.717465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.717487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 
00:31:45.378 [2024-12-05 14:03:27.717593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.717614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 00:31:45.378 [2024-12-05 14:03:27.717794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.717819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 00:31:45.378 [2024-12-05 14:03:27.717909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.717931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 00:31:45.378 [2024-12-05 14:03:27.718078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.718099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 00:31:45.378 [2024-12-05 14:03:27.718245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.718267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 
00:31:45.378 [2024-12-05 14:03:27.718361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.718391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 00:31:45.378 [2024-12-05 14:03:27.718484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.718506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 00:31:45.378 [2024-12-05 14:03:27.718594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.718616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 00:31:45.378 [2024-12-05 14:03:27.718769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.718791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 00:31:45.378 [2024-12-05 14:03:27.718950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.718973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 
00:31:45.378 [2024-12-05 14:03:27.719080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.719102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 00:31:45.378 [2024-12-05 14:03:27.719192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.719214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 00:31:45.378 [2024-12-05 14:03:27.719385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.719408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 00:31:45.378 [2024-12-05 14:03:27.719500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.719523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 00:31:45.378 [2024-12-05 14:03:27.719672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.719694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 
00:31:45.378 [2024-12-05 14:03:27.719790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.719812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 00:31:45.378 [2024-12-05 14:03:27.719894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.719914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 00:31:45.378 [2024-12-05 14:03:27.720074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.720096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 00:31:45.378 [2024-12-05 14:03:27.720190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.720212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 00:31:45.378 [2024-12-05 14:03:27.720408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.720434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 
00:31:45.378 [2024-12-05 14:03:27.720518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.720540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 00:31:45.378 [2024-12-05 14:03:27.720630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.720652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 00:31:45.378 [2024-12-05 14:03:27.720819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.378 [2024-12-05 14:03:27.720841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.378 qpair failed and we were unable to recover it. 00:31:45.378 [2024-12-05 14:03:27.721000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.721021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 00:31:45.379 [2024-12-05 14:03:27.721109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.721131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 
00:31:45.379 [2024-12-05 14:03:27.721221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.721243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 00:31:45.379 [2024-12-05 14:03:27.721352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.721399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 00:31:45.379 [2024-12-05 14:03:27.721481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.721501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 00:31:45.379 [2024-12-05 14:03:27.721646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.721670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 00:31:45.379 [2024-12-05 14:03:27.721836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.721868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 
00:31:45.379 [2024-12-05 14:03:27.722034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.722066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 00:31:45.379 [2024-12-05 14:03:27.722179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.722212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 00:31:45.379 [2024-12-05 14:03:27.722388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.722422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 00:31:45.379 [2024-12-05 14:03:27.722593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.722625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 00:31:45.379 [2024-12-05 14:03:27.722756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.722778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 
00:31:45.379 [2024-12-05 14:03:27.723599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.379 [2024-12-05 14:03:27.723672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.379 qpair failed and we were unable to recover it.
00:31:45.379 [2024-12-05 14:03:27.723958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.724030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 00:31:45.379 [2024-12-05 14:03:27.724227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.724263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 00:31:45.379 [2024-12-05 14:03:27.724496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.724532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 00:31:45.379 [2024-12-05 14:03:27.724653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.724686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 00:31:45.379 [2024-12-05 14:03:27.724791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.724822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 
00:31:45.379 [2024-12-05 14:03:27.724950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.724982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 00:31:45.379 [2024-12-05 14:03:27.725174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.725205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 00:31:45.379 [2024-12-05 14:03:27.725314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.725345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 00:31:45.379 [2024-12-05 14:03:27.725462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.725486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 00:31:45.379 [2024-12-05 14:03:27.725581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.725603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 
00:31:45.379 [2024-12-05 14:03:27.725754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.725776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 00:31:45.379 [2024-12-05 14:03:27.725938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.725960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 00:31:45.379 [2024-12-05 14:03:27.726060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.726081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 00:31:45.379 [2024-12-05 14:03:27.726247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.726269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 00:31:45.379 [2024-12-05 14:03:27.726470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.726493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 
00:31:45.379 [2024-12-05 14:03:27.726652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.726674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 00:31:45.379 [2024-12-05 14:03:27.726799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.726831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 00:31:45.379 [2024-12-05 14:03:27.727068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.727100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 00:31:45.379 [2024-12-05 14:03:27.727363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.727410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 00:31:45.379 [2024-12-05 14:03:27.727602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.727634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 
00:31:45.379 [2024-12-05 14:03:27.727753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.727788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 00:31:45.379 [2024-12-05 14:03:27.727905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.379 [2024-12-05 14:03:27.727938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.379 qpair failed and we were unable to recover it. 00:31:45.379 [2024-12-05 14:03:27.728126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.728147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 00:31:45.380 [2024-12-05 14:03:27.728265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.728287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 00:31:45.380 [2024-12-05 14:03:27.728432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.728457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 
00:31:45.380 [2024-12-05 14:03:27.728553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.728576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 00:31:45.380 [2024-12-05 14:03:27.728655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.728680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 00:31:45.380 [2024-12-05 14:03:27.728852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.728873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 00:31:45.380 [2024-12-05 14:03:27.729051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.729084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 00:31:45.380 [2024-12-05 14:03:27.729204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.729235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 
00:31:45.380 [2024-12-05 14:03:27.729411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.729444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 00:31:45.380 [2024-12-05 14:03:27.729576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.729608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 00:31:45.380 [2024-12-05 14:03:27.729787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.729819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 00:31:45.380 [2024-12-05 14:03:27.729922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.729954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 00:31:45.380 [2024-12-05 14:03:27.730130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.730162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 
00:31:45.380 [2024-12-05 14:03:27.730341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.730384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 00:31:45.380 [2024-12-05 14:03:27.730516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.730548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 00:31:45.380 [2024-12-05 14:03:27.730661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.730692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 00:31:45.380 [2024-12-05 14:03:27.730806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.730836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 00:31:45.380 [2024-12-05 14:03:27.730955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.730987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 
00:31:45.380 [2024-12-05 14:03:27.731107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.731140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 00:31:45.380 [2024-12-05 14:03:27.731243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.731264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 00:31:45.380 [2024-12-05 14:03:27.731366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.731394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 00:31:45.380 [2024-12-05 14:03:27.731541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.731562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 00:31:45.380 [2024-12-05 14:03:27.731658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.731679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 
00:31:45.380 [2024-12-05 14:03:27.731826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.731847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 00:31:45.380 [2024-12-05 14:03:27.732006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.732050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 00:31:45.380 [2024-12-05 14:03:27.732174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.732206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 00:31:45.380 [2024-12-05 14:03:27.732392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.732427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 00:31:45.380 [2024-12-05 14:03:27.732538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.732571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 
00:31:45.380 [2024-12-05 14:03:27.732700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.732732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 00:31:45.380 [2024-12-05 14:03:27.732862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.732895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 00:31:45.380 [2024-12-05 14:03:27.733075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.733108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 00:31:45.380 [2024-12-05 14:03:27.733290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.733315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 00:31:45.380 [2024-12-05 14:03:27.733533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.733556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 
00:31:45.380 [2024-12-05 14:03:27.733738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.733759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 00:31:45.380 [2024-12-05 14:03:27.733849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.733870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.380 qpair failed and we were unable to recover it. 00:31:45.380 [2024-12-05 14:03:27.734088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.380 [2024-12-05 14:03:27.734110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.381 qpair failed and we were unable to recover it. 00:31:45.381 [2024-12-05 14:03:27.734272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.381 [2024-12-05 14:03:27.734295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.381 qpair failed and we were unable to recover it. 00:31:45.381 [2024-12-05 14:03:27.734394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.381 [2024-12-05 14:03:27.734416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.381 qpair failed and we were unable to recover it. 
00:31:45.381 [2024-12-05 14:03:27.734576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.381 [2024-12-05 14:03:27.734597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.381 qpair failed and we were unable to recover it. 00:31:45.381 [2024-12-05 14:03:27.734831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.381 [2024-12-05 14:03:27.734853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.381 qpair failed and we were unable to recover it. 00:31:45.381 [2024-12-05 14:03:27.734946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.381 [2024-12-05 14:03:27.734967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.381 qpair failed and we were unable to recover it. 00:31:45.381 [2024-12-05 14:03:27.735143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.381 [2024-12-05 14:03:27.735165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.381 qpair failed and we were unable to recover it. 00:31:45.381 [2024-12-05 14:03:27.735332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.381 [2024-12-05 14:03:27.735354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.381 qpair failed and we were unable to recover it. 
00:31:45.381 [2024-12-05 14:03:27.735457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.381 [2024-12-05 14:03:27.735479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.381 qpair failed and we were unable to recover it. 00:31:45.381 [2024-12-05 14:03:27.735645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.381 [2024-12-05 14:03:27.735666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.381 qpair failed and we were unable to recover it. 00:31:45.381 [2024-12-05 14:03:27.735840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.381 [2024-12-05 14:03:27.735872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.381 qpair failed and we were unable to recover it. 00:31:45.381 [2024-12-05 14:03:27.735978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.381 [2024-12-05 14:03:27.736009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.381 qpair failed and we were unable to recover it. 00:31:45.381 [2024-12-05 14:03:27.736112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.381 [2024-12-05 14:03:27.736144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.381 qpair failed and we were unable to recover it. 
00:31:45.381 [2024-12-05 14:03:27.736261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.381 [2024-12-05 14:03:27.736293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.381 qpair failed and we were unable to recover it. 00:31:45.381 [2024-12-05 14:03:27.736413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.381 [2024-12-05 14:03:27.736446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.381 qpair failed and we were unable to recover it. 00:31:45.381 [2024-12-05 14:03:27.736550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.381 [2024-12-05 14:03:27.736583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.381 qpair failed and we were unable to recover it. 00:31:45.381 [2024-12-05 14:03:27.736764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.381 [2024-12-05 14:03:27.736796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.381 qpair failed and we were unable to recover it. 00:31:45.381 [2024-12-05 14:03:27.736973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.381 [2024-12-05 14:03:27.737005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.381 qpair failed and we were unable to recover it. 
00:31:45.381 [2024-12-05 14:03:27.737107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.381 [2024-12-05 14:03:27.737139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.381 qpair failed and we were unable to recover it. 00:31:45.381 [2024-12-05 14:03:27.737398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.381 [2024-12-05 14:03:27.737471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:45.381 qpair failed and we were unable to recover it. 00:31:45.381 [2024-12-05 14:03:27.737677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.381 [2024-12-05 14:03:27.737712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:45.381 qpair failed and we were unable to recover it. 00:31:45.381 [2024-12-05 14:03:27.737827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.381 [2024-12-05 14:03:27.737859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:45.381 qpair failed and we were unable to recover it. 00:31:45.381 [2024-12-05 14:03:27.737968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.381 [2024-12-05 14:03:27.737992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.381 qpair failed and we were unable to recover it. 
00:31:45.381 [2024-12-05 14:03:27.738077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.381 [2024-12-05 14:03:27.738099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.381 qpair failed and we were unable to recover it. 00:31:45.381 [2024-12-05 14:03:27.738250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.381 [2024-12-05 14:03:27.738271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.381 qpair failed and we were unable to recover it. 00:31:45.381 [2024-12-05 14:03:27.738383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.381 [2024-12-05 14:03:27.738405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.381 qpair failed and we were unable to recover it. 00:31:45.381 [2024-12-05 14:03:27.738570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.381 [2024-12-05 14:03:27.738592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.381 qpair failed and we were unable to recover it. 00:31:45.381 [2024-12-05 14:03:27.738752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.381 [2024-12-05 14:03:27.738773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.381 qpair failed and we were unable to recover it. 
00:31:45.381 [2024-12-05 14:03:27.738881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.381 [2024-12-05 14:03:27.738901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.381 qpair failed and we were unable to recover it.
[... the three-line entry above repeats continuously from 14:03:27.738 through 14:03:27.756 (errno 111 = ECONNREFUSED), all targeting addr=10.0.0.2, port=4420; most entries report tqpair=0xbe5be0, with a smaller number reporting tqpair=0x7fdb68000b90 ...]
00:31:45.384 [2024-12-05 14:03:27.756323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.384 [2024-12-05 14:03:27.756345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.384 qpair failed and we were unable to recover it. 00:31:45.384 [2024-12-05 14:03:27.756445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.384 [2024-12-05 14:03:27.756478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.384 qpair failed and we were unable to recover it. 00:31:45.384 [2024-12-05 14:03:27.756559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.384 [2024-12-05 14:03:27.756581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.384 qpair failed and we were unable to recover it. 00:31:45.384 [2024-12-05 14:03:27.756665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.384 [2024-12-05 14:03:27.756686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.384 qpair failed and we were unable to recover it. 00:31:45.384 [2024-12-05 14:03:27.756860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.384 [2024-12-05 14:03:27.756882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.384 qpair failed and we were unable to recover it. 
00:31:45.384 [2024-12-05 14:03:27.756967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.384 [2024-12-05 14:03:27.756988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.384 qpair failed and we were unable to recover it. 00:31:45.384 [2024-12-05 14:03:27.757201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.384 [2024-12-05 14:03:27.757223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.384 qpair failed and we were unable to recover it. 00:31:45.384 [2024-12-05 14:03:27.757306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.384 [2024-12-05 14:03:27.757327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.384 qpair failed and we were unable to recover it. 00:31:45.384 [2024-12-05 14:03:27.757416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.384 [2024-12-05 14:03:27.757438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.384 qpair failed and we were unable to recover it. 00:31:45.384 [2024-12-05 14:03:27.757540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.384 [2024-12-05 14:03:27.757562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.384 qpair failed and we were unable to recover it. 
00:31:45.384 [2024-12-05 14:03:27.757655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.384 [2024-12-05 14:03:27.757676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.384 qpair failed and we were unable to recover it. 00:31:45.384 [2024-12-05 14:03:27.757782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.384 [2024-12-05 14:03:27.757803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.384 qpair failed and we were unable to recover it. 00:31:45.384 [2024-12-05 14:03:27.757885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.384 [2024-12-05 14:03:27.757906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.384 qpair failed and we were unable to recover it. 00:31:45.384 [2024-12-05 14:03:27.757985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.384 [2024-12-05 14:03:27.758006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.384 qpair failed and we were unable to recover it. 00:31:45.384 [2024-12-05 14:03:27.758104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.384 [2024-12-05 14:03:27.758125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.384 qpair failed and we were unable to recover it. 
00:31:45.384 [2024-12-05 14:03:27.758208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.384 [2024-12-05 14:03:27.758229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.384 qpair failed and we were unable to recover it. 00:31:45.384 [2024-12-05 14:03:27.758325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.384 [2024-12-05 14:03:27.758351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.384 qpair failed and we were unable to recover it. 00:31:45.384 [2024-12-05 14:03:27.758515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.384 [2024-12-05 14:03:27.758537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.384 qpair failed and we were unable to recover it. 00:31:45.384 [2024-12-05 14:03:27.758686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.384 [2024-12-05 14:03:27.758708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.384 qpair failed and we were unable to recover it. 00:31:45.384 [2024-12-05 14:03:27.758860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.384 [2024-12-05 14:03:27.758882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.384 qpair failed and we were unable to recover it. 
00:31:45.384 [2024-12-05 14:03:27.758971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.384 [2024-12-05 14:03:27.758993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.384 qpair failed and we were unable to recover it. 00:31:45.384 [2024-12-05 14:03:27.759156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.759188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.759292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.759324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.759439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.759472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.759645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.759677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 
00:31:45.385 [2024-12-05 14:03:27.759791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.759823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.760021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.760042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.760131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.760153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.760242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.760263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.760341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.760362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 
00:31:45.385 [2024-12-05 14:03:27.760590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.760613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.760711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.760732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.760833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.760854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.760962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.760983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.761077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.761098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 
00:31:45.385 [2024-12-05 14:03:27.761187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.761208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.761312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.761334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.761423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.761445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.761597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.761617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.761691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.761712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 
00:31:45.385 [2024-12-05 14:03:27.761812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.761833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.761928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.761950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.762115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.762136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.762233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.762254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.762339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.762361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 
00:31:45.385 [2024-12-05 14:03:27.762467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.762489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.762575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.762599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.762855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.762877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.763030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.763051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.763203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.763225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 
00:31:45.385 [2024-12-05 14:03:27.763324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.763345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.763435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.763457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.763539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.763560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.763640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.763661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.763751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.763772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 
00:31:45.385 [2024-12-05 14:03:27.763863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.763885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.764039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.764060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.764219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.764245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.764421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.764443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.385 qpair failed and we were unable to recover it. 00:31:45.385 [2024-12-05 14:03:27.764545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.385 [2024-12-05 14:03:27.764567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.386 qpair failed and we were unable to recover it. 
00:31:45.386 [2024-12-05 14:03:27.764652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.386 [2024-12-05 14:03:27.764674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.386 qpair failed and we were unable to recover it. 00:31:45.386 [2024-12-05 14:03:27.764885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.386 [2024-12-05 14:03:27.764907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.386 qpair failed and we were unable to recover it. 00:31:45.386 [2024-12-05 14:03:27.764999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.386 [2024-12-05 14:03:27.765020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.386 qpair failed and we were unable to recover it. 00:31:45.386 [2024-12-05 14:03:27.765112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.386 [2024-12-05 14:03:27.765134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.386 qpair failed and we were unable to recover it. 00:31:45.386 [2024-12-05 14:03:27.765304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.386 [2024-12-05 14:03:27.765325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.386 qpair failed and we were unable to recover it. 
00:31:45.386 [2024-12-05 14:03:27.765441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.386 [2024-12-05 14:03:27.765463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.386 qpair failed and we were unable to recover it. 00:31:45.386 [2024-12-05 14:03:27.765560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.386 [2024-12-05 14:03:27.765582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.386 qpair failed and we were unable to recover it. 00:31:45.386 [2024-12-05 14:03:27.765676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.386 [2024-12-05 14:03:27.765697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.386 qpair failed and we were unable to recover it. 00:31:45.386 [2024-12-05 14:03:27.765863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.386 [2024-12-05 14:03:27.765885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.386 qpair failed and we were unable to recover it. 00:31:45.386 [2024-12-05 14:03:27.765991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.386 [2024-12-05 14:03:27.766028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.386 qpair failed and we were unable to recover it. 
00:31:45.386 [2024-12-05 14:03:27.766139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.386 [2024-12-05 14:03:27.766171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.386 qpair failed and we were unable to recover it. 00:31:45.386 [2024-12-05 14:03:27.766282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.386 [2024-12-05 14:03:27.766313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.386 qpair failed and we were unable to recover it. 00:31:45.386 [2024-12-05 14:03:27.766497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.386 [2024-12-05 14:03:27.766530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.386 qpair failed and we were unable to recover it. 00:31:45.386 [2024-12-05 14:03:27.766636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.386 [2024-12-05 14:03:27.766667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.386 qpair failed and we were unable to recover it. 00:31:45.386 [2024-12-05 14:03:27.766903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.386 [2024-12-05 14:03:27.766924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.386 qpair failed and we were unable to recover it. 
00:31:45.386 [2024-12-05 14:03:27.767004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.386 [2024-12-05 14:03:27.767025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.386 qpair failed and we were unable to recover it. 00:31:45.386 [2024-12-05 14:03:27.767116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.386 [2024-12-05 14:03:27.767137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.386 qpair failed and we were unable to recover it. 00:31:45.386 [2024-12-05 14:03:27.767283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.386 [2024-12-05 14:03:27.767304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.386 qpair failed and we were unable to recover it. 00:31:45.386 [2024-12-05 14:03:27.767547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.386 [2024-12-05 14:03:27.767570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.386 qpair failed and we were unable to recover it. 00:31:45.386 [2024-12-05 14:03:27.767653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.386 [2024-12-05 14:03:27.767674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.386 qpair failed and we were unable to recover it. 
00:31:45.386 [2024-12-05 14:03:27.767765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.386 [2024-12-05 14:03:27.767787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.386 qpair failed and we were unable to recover it. 00:31:45.386 [2024-12-05 14:03:27.767946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.386 [2024-12-05 14:03:27.767967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.386 qpair failed and we were unable to recover it. 00:31:45.386 [2024-12-05 14:03:27.768135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.386 [2024-12-05 14:03:27.768167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.386 qpair failed and we were unable to recover it. 00:31:45.386 [2024-12-05 14:03:27.768278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.386 [2024-12-05 14:03:27.768310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.386 qpair failed and we were unable to recover it. 00:31:45.386 [2024-12-05 14:03:27.768527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.386 [2024-12-05 14:03:27.768565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.386 qpair failed and we were unable to recover it. 
00:31:45.386 [2024-12-05 14:03:27.768684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.386 [2024-12-05 14:03:27.768716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.386 qpair failed and we were unable to recover it.
00:31:45.386 [2024-12-05 14:03:27.768819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.386 [2024-12-05 14:03:27.768850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.386 qpair failed and we were unable to recover it.
00:31:45.386 [2024-12-05 14:03:27.768960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.386 [2024-12-05 14:03:27.768991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.386 qpair failed and we were unable to recover it.
00:31:45.386 [2024-12-05 14:03:27.769217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.386 [2024-12-05 14:03:27.769249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.386 qpair failed and we were unable to recover it.
00:31:45.386 [2024-12-05 14:03:27.769489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.386 [2024-12-05 14:03:27.769512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.386 qpair failed and we were unable to recover it.
00:31:45.386 [2024-12-05 14:03:27.769727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.386 [2024-12-05 14:03:27.769748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.386 qpair failed and we were unable to recover it.
00:31:45.386 [2024-12-05 14:03:27.769858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.386 [2024-12-05 14:03:27.769880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.386 qpair failed and we were unable to recover it.
00:31:45.386 [2024-12-05 14:03:27.769984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.386 [2024-12-05 14:03:27.770004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.386 qpair failed and we were unable to recover it.
00:31:45.386 [2024-12-05 14:03:27.770096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.386 [2024-12-05 14:03:27.770117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.386 qpair failed and we were unable to recover it.
00:31:45.386 [2024-12-05 14:03:27.770208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.386 [2024-12-05 14:03:27.770230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.386 qpair failed and we were unable to recover it.
00:31:45.386 [2024-12-05 14:03:27.770331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.386 [2024-12-05 14:03:27.770352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.386 qpair failed and we were unable to recover it.
00:31:45.386 [2024-12-05 14:03:27.770522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.386 [2024-12-05 14:03:27.770562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.386 qpair failed and we were unable to recover it.
00:31:45.386 [2024-12-05 14:03:27.770764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.386 [2024-12-05 14:03:27.770794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.386 qpair failed and we were unable to recover it.
00:31:45.386 [2024-12-05 14:03:27.770966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.386 [2024-12-05 14:03:27.771037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.386 qpair failed and we were unable to recover it.
00:31:45.386 [2024-12-05 14:03:27.771171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.386 [2024-12-05 14:03:27.771206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.386 qpair failed and we were unable to recover it.
00:31:45.386 [2024-12-05 14:03:27.771330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.386 [2024-12-05 14:03:27.771363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.386 qpair failed and we were unable to recover it.
00:31:45.386 [2024-12-05 14:03:27.771493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.386 [2024-12-05 14:03:27.771526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.386 qpair failed and we were unable to recover it.
00:31:45.386 [2024-12-05 14:03:27.771694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.386 [2024-12-05 14:03:27.771730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.386 qpair failed and we were unable to recover it.
00:31:45.386 [2024-12-05 14:03:27.771921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.386 [2024-12-05 14:03:27.771953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.386 qpair failed and we were unable to recover it.
00:31:45.386 [2024-12-05 14:03:27.772091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.386 [2024-12-05 14:03:27.772122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.386 qpair failed and we were unable to recover it.
00:31:45.386 [2024-12-05 14:03:27.772290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.386 [2024-12-05 14:03:27.772329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.386 qpair failed and we were unable to recover it.
00:31:45.386 [2024-12-05 14:03:27.772513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.386 [2024-12-05 14:03:27.772546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.386 qpair failed and we were unable to recover it.
00:31:45.386 [2024-12-05 14:03:27.772663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.772695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.772802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.772834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.772955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.772986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.773201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.773234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.773337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.773391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.773561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.773592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.773688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.773712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.773864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.773886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.773983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.774004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.774100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.774122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.774202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.774223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.774404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.774426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.774506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.774528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.774752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.774775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.774932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.774954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.775061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.775082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.775162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.775184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.775292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.775314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.775483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.775525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.775636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.775669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.775928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.775960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.776132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.776153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.776326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.776358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.776572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.776605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.776737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.776769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.777031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.777063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.777177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.777209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.777414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.777447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.777565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.777596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.777834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.777866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.778044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.778075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.778269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.778300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.778421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.778455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.778730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.778762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.778942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.778974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.779147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.779168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.779349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.779376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.779523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.779563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.779687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.779718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.779889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.779921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.780028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.780070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.780150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.780171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.780277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.780299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.780391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.780412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.780499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.780519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.780682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.780711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.780812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.780833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.780918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.780939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.781041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.781063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.781163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.781184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.781284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.387 [2024-12-05 14:03:27.781306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.387 qpair failed and we were unable to recover it.
00:31:45.387 [2024-12-05 14:03:27.781492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.388 [2024-12-05 14:03:27.781515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.388 qpair failed and we were unable to recover it.
00:31:45.388 [2024-12-05 14:03:27.781596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.388 [2024-12-05 14:03:27.781617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.388 qpair failed and we were unable to recover it.
00:31:45.388 [2024-12-05 14:03:27.781768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.388 [2024-12-05 14:03:27.781789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.388 qpair failed and we were unable to recover it.
00:31:45.388 [2024-12-05 14:03:27.781876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.388 [2024-12-05 14:03:27.781897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.388 qpair failed and we were unable to recover it.
00:31:45.388 [2024-12-05 14:03:27.781997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.388 [2024-12-05 14:03:27.782020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.388 qpair failed and we were unable to recover it.
00:31:45.388 [2024-12-05 14:03:27.782118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.388 [2024-12-05 14:03:27.782140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.388 qpair failed and we were unable to recover it.
00:31:45.388 [2024-12-05 14:03:27.782226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.388 [2024-12-05 14:03:27.782246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.388 qpair failed and we were unable to recover it.
00:31:45.388 [2024-12-05 14:03:27.782342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.388 [2024-12-05 14:03:27.782363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.388 qpair failed and we were unable to recover it.
00:31:45.388 [2024-12-05 14:03:27.782480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.388 [2024-12-05 14:03:27.782502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.388 qpair failed and we were unable to recover it.
00:31:45.388 [2024-12-05 14:03:27.782607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.388 [2024-12-05 14:03:27.782629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.388 qpair failed and we were unable to recover it.
00:31:45.388 [2024-12-05 14:03:27.782849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.388 [2024-12-05 14:03:27.782870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.388 qpair failed and we were unable to recover it.
00:31:45.388 [2024-12-05 14:03:27.782968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.388 [2024-12-05 14:03:27.782989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.388 qpair failed and we were unable to recover it.
00:31:45.388 [2024-12-05 14:03:27.783069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.388 [2024-12-05 14:03:27.783090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.388 qpair failed and we were unable to recover it.
00:31:45.388 [2024-12-05 14:03:27.783250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.388 [2024-12-05 14:03:27.783271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.388 qpair failed and we were unable to recover it.
00:31:45.388 [2024-12-05 14:03:27.783364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.388 [2024-12-05 14:03:27.783396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.388 qpair failed and we were unable to recover it.
00:31:45.388 [2024-12-05 14:03:27.783491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.388 [2024-12-05 14:03:27.783513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.388 qpair failed and we were unable to recover it.
00:31:45.388 [2024-12-05 14:03:27.783595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.388 [2024-12-05 14:03:27.783615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.388 qpair failed and we were unable to recover it.
00:31:45.388 [2024-12-05 14:03:27.783695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.388 [2024-12-05 14:03:27.783716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.388 qpair failed and we were unable to recover it.
00:31:45.388 [2024-12-05 14:03:27.783826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.388 [2024-12-05 14:03:27.783848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.388 qpair failed and we were unable to recover it.
00:31:45.388 [2024-12-05 14:03:27.784006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.388 [2024-12-05 14:03:27.784028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.388 qpair failed and we were unable to recover it.
00:31:45.388 [2024-12-05 14:03:27.784245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.388 [2024-12-05 14:03:27.784267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.388 qpair failed and we were unable to recover it.
00:31:45.388 [2024-12-05 14:03:27.784374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.388 [2024-12-05 14:03:27.784401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.388 qpair failed and we were unable to recover it.
00:31:45.388 [2024-12-05 14:03:27.784497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.388 [2024-12-05 14:03:27.784517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.388 qpair failed and we were unable to recover it. 00:31:45.388 [2024-12-05 14:03:27.784613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.388 [2024-12-05 14:03:27.784635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.388 qpair failed and we were unable to recover it. 00:31:45.388 [2024-12-05 14:03:27.784718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.388 [2024-12-05 14:03:27.784738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.388 qpair failed and we were unable to recover it. 00:31:45.388 [2024-12-05 14:03:27.784839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.388 [2024-12-05 14:03:27.784860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.388 qpair failed and we were unable to recover it. 00:31:45.388 [2024-12-05 14:03:27.785007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.388 [2024-12-05 14:03:27.785028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.388 qpair failed and we were unable to recover it. 
00:31:45.388 [2024-12-05 14:03:27.785271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.388 [2024-12-05 14:03:27.785292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.388 qpair failed and we were unable to recover it. 00:31:45.388 [2024-12-05 14:03:27.785452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.388 [2024-12-05 14:03:27.785474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.388 qpair failed and we were unable to recover it. 00:31:45.388 [2024-12-05 14:03:27.785700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.388 [2024-12-05 14:03:27.785722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.388 qpair failed and we were unable to recover it. 00:31:45.388 [2024-12-05 14:03:27.785815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.388 [2024-12-05 14:03:27.785836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.388 qpair failed and we were unable to recover it. 00:31:45.388 [2024-12-05 14:03:27.785992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.388 [2024-12-05 14:03:27.786014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.388 qpair failed and we were unable to recover it. 
00:31:45.388 [2024-12-05 14:03:27.786108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.388 [2024-12-05 14:03:27.786130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.388 qpair failed and we were unable to recover it. 00:31:45.388 [2024-12-05 14:03:27.786220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.388 [2024-12-05 14:03:27.786240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.388 qpair failed and we were unable to recover it. 00:31:45.388 [2024-12-05 14:03:27.786320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.388 [2024-12-05 14:03:27.786340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.388 qpair failed and we were unable to recover it. 00:31:45.388 [2024-12-05 14:03:27.786450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.388 [2024-12-05 14:03:27.786473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.388 qpair failed and we were unable to recover it. 00:31:45.388 [2024-12-05 14:03:27.786553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.388 [2024-12-05 14:03:27.786574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.388 qpair failed and we were unable to recover it. 
00:31:45.388 [2024-12-05 14:03:27.786669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.388 [2024-12-05 14:03:27.786691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.388 qpair failed and we were unable to recover it. 00:31:45.388 [2024-12-05 14:03:27.786910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.388 [2024-12-05 14:03:27.786931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.388 qpair failed and we were unable to recover it. 00:31:45.388 [2024-12-05 14:03:27.787117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.388 [2024-12-05 14:03:27.787138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.388 qpair failed and we were unable to recover it. 00:31:45.388 [2024-12-05 14:03:27.787253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.388 [2024-12-05 14:03:27.787274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.388 qpair failed and we were unable to recover it. 00:31:45.388 [2024-12-05 14:03:27.787421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.388 [2024-12-05 14:03:27.787443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.388 qpair failed and we were unable to recover it. 
00:31:45.388 [2024-12-05 14:03:27.787538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.388 [2024-12-05 14:03:27.787562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.388 qpair failed and we were unable to recover it. 00:31:45.388 [2024-12-05 14:03:27.787655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.388 [2024-12-05 14:03:27.787676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.388 qpair failed and we were unable to recover it. 00:31:45.388 [2024-12-05 14:03:27.787775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.388 [2024-12-05 14:03:27.787797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.388 qpair failed and we were unable to recover it. 00:31:45.388 [2024-12-05 14:03:27.787967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.388 [2024-12-05 14:03:27.787988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.388 qpair failed and we were unable to recover it. 00:31:45.388 [2024-12-05 14:03:27.788076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.388 [2024-12-05 14:03:27.788096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.388 qpair failed and we were unable to recover it. 
00:31:45.388 [2024-12-05 14:03:27.788250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.388 [2024-12-05 14:03:27.788271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.388 qpair failed and we were unable to recover it. 00:31:45.388 [2024-12-05 14:03:27.788421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.388 [2024-12-05 14:03:27.788443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.388 qpair failed and we were unable to recover it. 00:31:45.388 [2024-12-05 14:03:27.788543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.388 [2024-12-05 14:03:27.788564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.388 qpair failed and we were unable to recover it. 00:31:45.388 [2024-12-05 14:03:27.788710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.388 [2024-12-05 14:03:27.788731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.388 qpair failed and we were unable to recover it. 00:31:45.388 [2024-12-05 14:03:27.788834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.388 [2024-12-05 14:03:27.788856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 
00:31:45.389 [2024-12-05 14:03:27.788962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.788984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.789167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.789198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.789311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.789343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.789524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.789556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.789679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.789710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 
00:31:45.389 [2024-12-05 14:03:27.789817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.789850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.790108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.790130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.790355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.790388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.790481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.790502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.790667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.790687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 
00:31:45.389 [2024-12-05 14:03:27.790784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.790809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.790924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.790946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.791028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.791049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.791217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.791238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.791395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.791417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 
00:31:45.389 [2024-12-05 14:03:27.791585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.791607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.791702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.791723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.791939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.791960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.792052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.792074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.792152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.792174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 
00:31:45.389 [2024-12-05 14:03:27.792267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.792288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.792446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.792468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.792619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.792641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.792789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.792810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.792919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.792941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 
00:31:45.389 [2024-12-05 14:03:27.793099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.793120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.793219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.793240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.793408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.793430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.793582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.793604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.793697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.793718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 
00:31:45.389 [2024-12-05 14:03:27.793804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.793826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.793930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.793951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.794100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.794122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.794280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.794301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.794455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.794479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 
00:31:45.389 [2024-12-05 14:03:27.794629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.794650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.794837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.794859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.795012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.795038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.795122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.795144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.795301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.795322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 
00:31:45.389 [2024-12-05 14:03:27.795540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.795562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.795732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.795763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.796027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.796059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.796164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.796195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.389 [2024-12-05 14:03:27.796377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.796399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 
00:31:45.389 [2024-12-05 14:03:27.796576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.389 [2024-12-05 14:03:27.796598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.389 qpair failed and we were unable to recover it. 00:31:45.390 [2024-12-05 14:03:27.796766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.390 [2024-12-05 14:03:27.796787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.390 qpair failed and we were unable to recover it. 00:31:45.390 [2024-12-05 14:03:27.796951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.390 [2024-12-05 14:03:27.796972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.390 qpair failed and we were unable to recover it. 00:31:45.390 [2024-12-05 14:03:27.797085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.390 [2024-12-05 14:03:27.797107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.390 qpair failed and we were unable to recover it. 00:31:45.390 [2024-12-05 14:03:27.797263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.390 [2024-12-05 14:03:27.797284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.390 qpair failed and we were unable to recover it. 
00:31:45.390 [2024-12-05 14:03:27.797384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.390 [2024-12-05 14:03:27.797406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.390 qpair failed and we were unable to recover it.
00:31:45.390 [2024-12-05 14:03:27.797568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.390 [2024-12-05 14:03:27.797635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:45.390 qpair failed and we were unable to recover it.
00:31:45.390 [2024-12-05 14:03:27.797869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.390 [2024-12-05 14:03:27.797939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.390 qpair failed and we were unable to recover it.
00:31:45.390 [2024-12-05 14:03:27.798203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.390 [2024-12-05 14:03:27.798239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.390 qpair failed and we were unable to recover it.
00:31:45.390 [2024-12-05 14:03:27.798363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.390 [2024-12-05 14:03:27.798411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.390 qpair failed and we were unable to recover it.
00:31:45.390 [2024-12-05 14:03:27.802208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.390 [2024-12-05 14:03:27.802229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.390 qpair failed and we were unable to recover it. 00:31:45.390 [2024-12-05 14:03:27.802316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.390 [2024-12-05 14:03:27.802339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.390 qpair failed and we were unable to recover it. 00:31:45.390 [2024-12-05 14:03:27.802452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.390 [2024-12-05 14:03:27.802475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.390 qpair failed and we were unable to recover it. 00:31:45.390 [2024-12-05 14:03:27.802593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.390 [2024-12-05 14:03:27.802628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:45.390 qpair failed and we were unable to recover it. 00:31:45.390 [2024-12-05 14:03:27.802824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.390 [2024-12-05 14:03:27.802856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:45.390 qpair failed and we were unable to recover it. 
00:31:45.390 [2024-12-05 14:03:27.803044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.390 [2024-12-05 14:03:27.803076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:45.390 qpair failed and we were unable to recover it. 00:31:45.390 [2024-12-05 14:03:27.803202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.390 [2024-12-05 14:03:27.803233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:45.390 qpair failed and we were unable to recover it. 00:31:45.390 [2024-12-05 14:03:27.803336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.390 [2024-12-05 14:03:27.803377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:45.390 qpair failed and we were unable to recover it. 00:31:45.390 [2024-12-05 14:03:27.803570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.390 [2024-12-05 14:03:27.803604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:45.390 qpair failed and we were unable to recover it. 00:31:45.390 [2024-12-05 14:03:27.803720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.390 [2024-12-05 14:03:27.803744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.390 qpair failed and we were unable to recover it. 
00:31:45.390 [2024-12-05 14:03:27.803841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.390 [2024-12-05 14:03:27.803864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.390 qpair failed and we were unable to recover it. 00:31:45.390 [2024-12-05 14:03:27.803947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.390 [2024-12-05 14:03:27.803970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.390 qpair failed and we were unable to recover it. 00:31:45.390 [2024-12-05 14:03:27.804138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.390 [2024-12-05 14:03:27.804160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.390 qpair failed and we were unable to recover it. 00:31:45.390 [2024-12-05 14:03:27.804408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.390 [2024-12-05 14:03:27.804430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.390 qpair failed and we were unable to recover it. 00:31:45.390 [2024-12-05 14:03:27.804515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.390 [2024-12-05 14:03:27.804536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.390 qpair failed and we were unable to recover it. 
00:31:45.390 [2024-12-05 14:03:27.804630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.390 [2024-12-05 14:03:27.804653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.390 qpair failed and we were unable to recover it. 00:31:45.390 [2024-12-05 14:03:27.804817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.390 [2024-12-05 14:03:27.804838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.390 qpair failed and we were unable to recover it. 00:31:45.390 [2024-12-05 14:03:27.804992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.390 [2024-12-05 14:03:27.805015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.390 qpair failed and we were unable to recover it. 00:31:45.390 [2024-12-05 14:03:27.805108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.390 [2024-12-05 14:03:27.805131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.390 qpair failed and we were unable to recover it. 00:31:45.390 [2024-12-05 14:03:27.805295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.390 [2024-12-05 14:03:27.805319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.390 qpair failed and we were unable to recover it. 
00:31:45.390 [2024-12-05 14:03:27.805439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.390 [2024-12-05 14:03:27.805464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.390 qpair failed and we were unable to recover it. 00:31:45.390 [2024-12-05 14:03:27.805623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.390 [2024-12-05 14:03:27.805664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.390 qpair failed and we were unable to recover it. 00:31:45.390 [2024-12-05 14:03:27.805796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.390 [2024-12-05 14:03:27.805830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.390 qpair failed and we were unable to recover it. 00:31:45.390 [2024-12-05 14:03:27.805952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.390 [2024-12-05 14:03:27.805984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.390 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.806156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.806196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 
00:31:45.391 [2024-12-05 14:03:27.806381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.806404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.806510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.806532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.806686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.806707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.806865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.806887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.807059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.807080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 
00:31:45.391 [2024-12-05 14:03:27.807173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.807199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.807312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.807334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.807436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.807458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.807613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.807635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.807796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.807818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 
00:31:45.391 [2024-12-05 14:03:27.807968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.807988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.808086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.808107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.808192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.808212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.808296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.808319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.808406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.808428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 
00:31:45.391 [2024-12-05 14:03:27.808524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.808545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.808650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.808673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.808766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.808788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.808868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.808890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.808995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.809016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 
00:31:45.391 [2024-12-05 14:03:27.809190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.809211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.809382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.809407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.809499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.809521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.809712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.809734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.809903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.809926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 
00:31:45.391 [2024-12-05 14:03:27.810022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.810043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.810129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.810151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.810239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.810260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.810354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.810381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.810487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.810508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 
00:31:45.391 [2024-12-05 14:03:27.810602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.810623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.810765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.810787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.810868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.810893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.810981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.811002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.811109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.811130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 
00:31:45.391 [2024-12-05 14:03:27.811305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.811327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.811421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.811444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.811555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.811579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.811673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.811694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.811772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.811792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 
00:31:45.391 [2024-12-05 14:03:27.811869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.811890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.811987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.812008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.812095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.812117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.812220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.812249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.812331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.812352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 
00:31:45.391 [2024-12-05 14:03:27.812468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.812491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.812636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.812699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.812828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.812865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.813041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.391 [2024-12-05 14:03:27.813074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:45.391 qpair failed and we were unable to recover it. 00:31:45.391 [2024-12-05 14:03:27.813257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.392 [2024-12-05 14:03:27.813280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.392 qpair failed and we were unable to recover it. 
00:31:45.392 [2024-12-05 14:03:27.813390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.392 [2024-12-05 14:03:27.813413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.392 qpair failed and we were unable to recover it.
00:31:45.392 [... the connect()/qpair-failure sequence above for tqpair=0xbe5be0 repeated with timestamps 14:03:27.813569 through 14:03:27.820042 ...]
00:31:45.393 [2024-12-05 14:03:27.820255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.393 [2024-12-05 14:03:27.820327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:45.393 qpair failed and we were unable to recover it.
00:31:45.393 [2024-12-05 14:03:27.820588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.393 [2024-12-05 14:03:27.820658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.393 qpair failed and we were unable to recover it.
00:31:45.395 [... the connect()/qpair-failure sequence for tqpair=0xbe5be0 repeated with timestamps 14:03:27.820782 through 14:03:27.831753 ...]
00:31:45.395 [2024-12-05 14:03:27.831913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.831935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 00:31:45.395 [2024-12-05 14:03:27.832081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.832103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 00:31:45.395 [2024-12-05 14:03:27.832188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.832209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 00:31:45.395 [2024-12-05 14:03:27.832356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.832393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 00:31:45.395 [2024-12-05 14:03:27.832497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.832519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 
00:31:45.395 [2024-12-05 14:03:27.832622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.832645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 00:31:45.395 [2024-12-05 14:03:27.832728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.832750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 00:31:45.395 [2024-12-05 14:03:27.832914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.832937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 00:31:45.395 [2024-12-05 14:03:27.833100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.833122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 00:31:45.395 [2024-12-05 14:03:27.833280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.833303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 
00:31:45.395 [2024-12-05 14:03:27.833536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.833560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 00:31:45.395 [2024-12-05 14:03:27.833734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.833755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 00:31:45.395 [2024-12-05 14:03:27.833849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.833871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 00:31:45.395 [2024-12-05 14:03:27.834088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.834109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 00:31:45.395 [2024-12-05 14:03:27.834267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.834288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 
00:31:45.395 [2024-12-05 14:03:27.834510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.834533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 00:31:45.395 [2024-12-05 14:03:27.834638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.834660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 00:31:45.395 [2024-12-05 14:03:27.834755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.834777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 00:31:45.395 [2024-12-05 14:03:27.834865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.834888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 00:31:45.395 [2024-12-05 14:03:27.835009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.835031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 
00:31:45.395 [2024-12-05 14:03:27.835121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.835142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 00:31:45.395 [2024-12-05 14:03:27.835297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.835319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 00:31:45.395 [2024-12-05 14:03:27.835404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.835426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 00:31:45.395 [2024-12-05 14:03:27.835507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.835529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 00:31:45.395 [2024-12-05 14:03:27.835623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.835646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 
00:31:45.395 [2024-12-05 14:03:27.835737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.835759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 00:31:45.395 [2024-12-05 14:03:27.835844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.835865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 00:31:45.395 [2024-12-05 14:03:27.836056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.836078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 00:31:45.395 [2024-12-05 14:03:27.836173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.836195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 00:31:45.395 [2024-12-05 14:03:27.836350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.836381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 
00:31:45.395 [2024-12-05 14:03:27.836557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.836580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 00:31:45.395 [2024-12-05 14:03:27.836694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.836717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 00:31:45.395 [2024-12-05 14:03:27.836870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.395 [2024-12-05 14:03:27.836891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.395 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.837064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.837086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.837237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.837258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 
00:31:45.396 [2024-12-05 14:03:27.837345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.837374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.837457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.837479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.837635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.837657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.837807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.837829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.837928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.837949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 
00:31:45.396 [2024-12-05 14:03:27.838191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.838213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.838360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.838399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.838569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.838591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.838686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.838710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.838821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.838847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 
00:31:45.396 [2024-12-05 14:03:27.839007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.839029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.839131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.839154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.839322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.839345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.839588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.839660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.839802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.839838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 
00:31:45.396 [2024-12-05 14:03:27.840031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.840065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.840188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.840214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.840385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.840407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.840517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.840539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.840647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.840670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 
00:31:45.396 [2024-12-05 14:03:27.840829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.840850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.841018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.841040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.841193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.841215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.841381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.841404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.841514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.841537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 
00:31:45.396 [2024-12-05 14:03:27.841754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.841777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.841872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.841894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.842007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.842029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.842179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.842200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.842291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.842312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 
00:31:45.396 [2024-12-05 14:03:27.842472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.842496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.842615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.842636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.842785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.842807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.842906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.842928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.843023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.843045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 
00:31:45.396 [2024-12-05 14:03:27.843267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.843289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.396 qpair failed and we were unable to recover it. 00:31:45.396 [2024-12-05 14:03:27.843380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.396 [2024-12-05 14:03:27.843408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.397 qpair failed and we were unable to recover it. 00:31:45.397 [2024-12-05 14:03:27.843567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.397 [2024-12-05 14:03:27.843588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.397 qpair failed and we were unable to recover it. 00:31:45.397 [2024-12-05 14:03:27.843752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.397 [2024-12-05 14:03:27.843774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.397 qpair failed and we were unable to recover it. 00:31:45.397 [2024-12-05 14:03:27.843865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.397 [2024-12-05 14:03:27.843887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.397 qpair failed and we were unable to recover it. 
00:31:45.397 [2024-12-05 14:03:27.844128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.397 [2024-12-05 14:03:27.844149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.397 qpair failed and we were unable to recover it. 00:31:45.397 [2024-12-05 14:03:27.844310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.397 [2024-12-05 14:03:27.844331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.397 qpair failed and we were unable to recover it. 00:31:45.397 [2024-12-05 14:03:27.844509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.397 [2024-12-05 14:03:27.844532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.397 qpair failed and we were unable to recover it. 00:31:45.397 [2024-12-05 14:03:27.844680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.397 [2024-12-05 14:03:27.844701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.397 qpair failed and we were unable to recover it. 00:31:45.397 [2024-12-05 14:03:27.844865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.397 [2024-12-05 14:03:27.844886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.397 qpair failed and we were unable to recover it. 
00:31:45.400 [2024-12-05 14:03:27.863540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.400 [2024-12-05 14:03:27.863562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.400 qpair failed and we were unable to recover it. 00:31:45.400 [2024-12-05 14:03:27.863777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.400 [2024-12-05 14:03:27.863798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.400 qpair failed and we were unable to recover it. 00:31:45.400 [2024-12-05 14:03:27.863893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.400 [2024-12-05 14:03:27.863915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.400 qpair failed and we were unable to recover it. 00:31:45.400 [2024-12-05 14:03:27.864068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.400 [2024-12-05 14:03:27.864090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.400 qpair failed and we were unable to recover it. 00:31:45.400 [2024-12-05 14:03:27.864246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.400 [2024-12-05 14:03:27.864268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.400 qpair failed and we were unable to recover it. 
00:31:45.400 [2024-12-05 14:03:27.864482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.400 [2024-12-05 14:03:27.864505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.400 qpair failed and we were unable to recover it. 00:31:45.400 [2024-12-05 14:03:27.864655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.400 [2024-12-05 14:03:27.864678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.400 qpair failed and we were unable to recover it. 00:31:45.400 [2024-12-05 14:03:27.864894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.400 [2024-12-05 14:03:27.864917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.400 qpair failed and we were unable to recover it. 00:31:45.400 [2024-12-05 14:03:27.865069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.400 [2024-12-05 14:03:27.865092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.400 qpair failed and we were unable to recover it. 00:31:45.400 [2024-12-05 14:03:27.865238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.400 [2024-12-05 14:03:27.865259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.400 qpair failed and we were unable to recover it. 
00:31:45.400 [2024-12-05 14:03:27.865427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.400 [2024-12-05 14:03:27.865449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.400 qpair failed and we were unable to recover it. 00:31:45.400 [2024-12-05 14:03:27.865558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.400 [2024-12-05 14:03:27.865592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.400 qpair failed and we were unable to recover it. 00:31:45.400 [2024-12-05 14:03:27.865685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.400 [2024-12-05 14:03:27.865707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.400 qpair failed and we were unable to recover it. 00:31:45.400 [2024-12-05 14:03:27.865820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.400 [2024-12-05 14:03:27.865841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.400 qpair failed and we were unable to recover it. 00:31:45.400 [2024-12-05 14:03:27.866069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.400 [2024-12-05 14:03:27.866091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.400 qpair failed and we were unable to recover it. 
00:31:45.400 [2024-12-05 14:03:27.866191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.400 [2024-12-05 14:03:27.866213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.400 qpair failed and we were unable to recover it. 00:31:45.400 [2024-12-05 14:03:27.866311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.400 [2024-12-05 14:03:27.866332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.400 qpair failed and we were unable to recover it. 00:31:45.400 [2024-12-05 14:03:27.866494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.400 [2024-12-05 14:03:27.866522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.400 qpair failed and we were unable to recover it. 00:31:45.400 [2024-12-05 14:03:27.866742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.400 [2024-12-05 14:03:27.866764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.400 qpair failed and we were unable to recover it. 00:31:45.400 [2024-12-05 14:03:27.866917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.400 [2024-12-05 14:03:27.866939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.400 qpair failed and we were unable to recover it. 
00:31:45.400 [2024-12-05 14:03:27.867094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.400 [2024-12-05 14:03:27.867115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.400 qpair failed and we were unable to recover it. 00:31:45.400 [2024-12-05 14:03:27.867223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.400 [2024-12-05 14:03:27.867244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.400 qpair failed and we were unable to recover it. 00:31:45.400 [2024-12-05 14:03:27.867343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.400 [2024-12-05 14:03:27.867364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.400 qpair failed and we were unable to recover it. 00:31:45.400 [2024-12-05 14:03:27.867533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.400 [2024-12-05 14:03:27.867554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.400 qpair failed and we were unable to recover it. 00:31:45.400 [2024-12-05 14:03:27.867714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.400 [2024-12-05 14:03:27.867737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.400 qpair failed and we were unable to recover it. 
00:31:45.400 [2024-12-05 14:03:27.867846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.400 [2024-12-05 14:03:27.867869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.400 qpair failed and we were unable to recover it. 00:31:45.401 [2024-12-05 14:03:27.868019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.401 [2024-12-05 14:03:27.868040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.401 qpair failed and we were unable to recover it. 00:31:45.401 [2024-12-05 14:03:27.868123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.401 [2024-12-05 14:03:27.868143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.401 qpair failed and we were unable to recover it. 00:31:45.401 [2024-12-05 14:03:27.868324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.401 [2024-12-05 14:03:27.868345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.401 qpair failed and we were unable to recover it. 00:31:45.401 [2024-12-05 14:03:27.868510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.401 [2024-12-05 14:03:27.868533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.401 qpair failed and we were unable to recover it. 
00:31:45.401 [2024-12-05 14:03:27.868751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.401 [2024-12-05 14:03:27.868773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.401 qpair failed and we were unable to recover it. 00:31:45.401 [2024-12-05 14:03:27.868877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.401 [2024-12-05 14:03:27.868898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.401 qpair failed and we were unable to recover it. 00:31:45.401 [2024-12-05 14:03:27.869122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.401 [2024-12-05 14:03:27.869143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.401 qpair failed and we were unable to recover it. 00:31:45.401 [2024-12-05 14:03:27.869366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.401 [2024-12-05 14:03:27.869401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.401 qpair failed and we were unable to recover it. 00:31:45.401 [2024-12-05 14:03:27.869495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.401 [2024-12-05 14:03:27.869519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.401 qpair failed and we were unable to recover it. 
00:31:45.401 [2024-12-05 14:03:27.869612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.401 [2024-12-05 14:03:27.869633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.401 qpair failed and we were unable to recover it. 00:31:45.401 [2024-12-05 14:03:27.869849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.699 [2024-12-05 14:03:27.869871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.699 qpair failed and we were unable to recover it. 00:31:45.699 [2024-12-05 14:03:27.870052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.699 [2024-12-05 14:03:27.870074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.699 qpair failed and we were unable to recover it. 00:31:45.699 [2024-12-05 14:03:27.870174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.699 [2024-12-05 14:03:27.870196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.699 qpair failed and we were unable to recover it. 00:31:45.699 [2024-12-05 14:03:27.870410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.699 [2024-12-05 14:03:27.870433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.699 qpair failed and we were unable to recover it. 
00:31:45.699 [2024-12-05 14:03:27.870525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.699 [2024-12-05 14:03:27.870546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.699 qpair failed and we were unable to recover it. 00:31:45.699 [2024-12-05 14:03:27.870640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.699 [2024-12-05 14:03:27.870661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.699 qpair failed and we were unable to recover it. 00:31:45.699 [2024-12-05 14:03:27.870827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.699 [2024-12-05 14:03:27.870849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.699 qpair failed and we were unable to recover it. 00:31:45.699 [2024-12-05 14:03:27.871019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.699 [2024-12-05 14:03:27.871041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.699 qpair failed and we were unable to recover it. 00:31:45.699 [2024-12-05 14:03:27.871197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.699 [2024-12-05 14:03:27.871219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.699 qpair failed and we were unable to recover it. 
00:31:45.699 [2024-12-05 14:03:27.871409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.699 [2024-12-05 14:03:27.871433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.699 qpair failed and we were unable to recover it. 00:31:45.699 [2024-12-05 14:03:27.871547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.699 [2024-12-05 14:03:27.871568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.699 qpair failed and we were unable to recover it. 00:31:45.699 [2024-12-05 14:03:27.871731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.699 [2024-12-05 14:03:27.871753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.699 qpair failed and we were unable to recover it. 00:31:45.699 [2024-12-05 14:03:27.871918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.699 [2024-12-05 14:03:27.871940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.699 qpair failed and we were unable to recover it. 00:31:45.699 [2024-12-05 14:03:27.872036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.699 [2024-12-05 14:03:27.872058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.699 qpair failed and we were unable to recover it. 
00:31:45.699 [2024-12-05 14:03:27.872208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.699 [2024-12-05 14:03:27.872229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.699 qpair failed and we were unable to recover it. 00:31:45.699 [2024-12-05 14:03:27.872343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.699 [2024-12-05 14:03:27.872371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.699 qpair failed and we were unable to recover it. 00:31:45.699 [2024-12-05 14:03:27.872506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.699 [2024-12-05 14:03:27.872531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.699 qpair failed and we were unable to recover it. 00:31:45.699 [2024-12-05 14:03:27.872698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.699 [2024-12-05 14:03:27.872721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.699 qpair failed and we were unable to recover it. 00:31:45.699 [2024-12-05 14:03:27.872825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.699 [2024-12-05 14:03:27.872848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.699 qpair failed and we were unable to recover it. 
00:31:45.699 [2024-12-05 14:03:27.873022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.699 [2024-12-05 14:03:27.873045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.699 qpair failed and we were unable to recover it. 00:31:45.699 [2024-12-05 14:03:27.873204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.699 [2024-12-05 14:03:27.873225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.699 qpair failed and we were unable to recover it. 00:31:45.699 [2024-12-05 14:03:27.873335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.699 [2024-12-05 14:03:27.873357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.699 qpair failed and we were unable to recover it. 00:31:45.699 [2024-12-05 14:03:27.873523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.699 [2024-12-05 14:03:27.873549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.699 qpair failed and we were unable to recover it. 00:31:45.699 [2024-12-05 14:03:27.873644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.699 [2024-12-05 14:03:27.873666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.699 qpair failed and we were unable to recover it. 
00:31:45.699 [2024-12-05 14:03:27.873851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.699 [2024-12-05 14:03:27.873874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.699 qpair failed and we were unable to recover it. 00:31:45.699 [2024-12-05 14:03:27.874056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.699 [2024-12-05 14:03:27.874079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.699 qpair failed and we were unable to recover it. 00:31:45.699 [2024-12-05 14:03:27.874177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.699 [2024-12-05 14:03:27.874199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.699 qpair failed and we were unable to recover it. 00:31:45.699 [2024-12-05 14:03:27.874303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.699 [2024-12-05 14:03:27.874326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.699 qpair failed and we were unable to recover it. 00:31:45.699 [2024-12-05 14:03:27.874433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.699 [2024-12-05 14:03:27.874456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 
00:31:45.700 [2024-12-05 14:03:27.874547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.874568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 00:31:45.700 [2024-12-05 14:03:27.874661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.874682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 00:31:45.700 [2024-12-05 14:03:27.874765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.874787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 00:31:45.700 [2024-12-05 14:03:27.874878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.874899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 00:31:45.700 [2024-12-05 14:03:27.875006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.875027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 
00:31:45.700 [2024-12-05 14:03:27.875191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.875213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 00:31:45.700 [2024-12-05 14:03:27.875316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.875337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 00:31:45.700 [2024-12-05 14:03:27.875522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.875545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 00:31:45.700 [2024-12-05 14:03:27.875726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.875749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 00:31:45.700 [2024-12-05 14:03:27.875863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.875885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 
00:31:45.700 [2024-12-05 14:03:27.876033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.876054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 00:31:45.700 [2024-12-05 14:03:27.876140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.876162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 00:31:45.700 [2024-12-05 14:03:27.876333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.876358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 00:31:45.700 [2024-12-05 14:03:27.876536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.876557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 00:31:45.700 [2024-12-05 14:03:27.876655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.876675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 
00:31:45.700 [2024-12-05 14:03:27.876818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.876841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 00:31:45.700 [2024-12-05 14:03:27.876973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.876994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 00:31:45.700 [2024-12-05 14:03:27.877149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.877169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 00:31:45.700 [2024-12-05 14:03:27.877264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.877284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 00:31:45.700 [2024-12-05 14:03:27.877387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.877408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 
00:31:45.700 [2024-12-05 14:03:27.877598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.877620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 00:31:45.700 [2024-12-05 14:03:27.877863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.877886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 00:31:45.700 [2024-12-05 14:03:27.877992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.878015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 00:31:45.700 [2024-12-05 14:03:27.878178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.878200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 00:31:45.700 [2024-12-05 14:03:27.878298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.878319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 
00:31:45.700 [2024-12-05 14:03:27.878492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.878514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 00:31:45.700 [2024-12-05 14:03:27.878675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.878697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 00:31:45.700 [2024-12-05 14:03:27.878786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.878806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 00:31:45.700 [2024-12-05 14:03:27.878913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.878935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 00:31:45.700 [2024-12-05 14:03:27.879118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.879139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 
00:31:45.700 [2024-12-05 14:03:27.879305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.879326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 00:31:45.700 [2024-12-05 14:03:27.879516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.879539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 00:31:45.700 [2024-12-05 14:03:27.879642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.879665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 00:31:45.700 [2024-12-05 14:03:27.879817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.879838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 00:31:45.700 [2024-12-05 14:03:27.880017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.700 [2024-12-05 14:03:27.880090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.700 qpair failed and we were unable to recover it. 
00:31:45.701 [2024-12-05 14:03:27.880309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.880344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 00:31:45.701 [2024-12-05 14:03:27.880576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.880611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 00:31:45.701 [2024-12-05 14:03:27.880727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.880752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 00:31:45.701 [2024-12-05 14:03:27.880871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.880893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 00:31:45.701 [2024-12-05 14:03:27.881046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.881069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 
00:31:45.701 [2024-12-05 14:03:27.881156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.881177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 00:31:45.701 [2024-12-05 14:03:27.881309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.881331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 00:31:45.701 [2024-12-05 14:03:27.881450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.881472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 00:31:45.701 [2024-12-05 14:03:27.881632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.881653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 00:31:45.701 [2024-12-05 14:03:27.881781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.881804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 
00:31:45.701 [2024-12-05 14:03:27.881915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.881938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 00:31:45.701 [2024-12-05 14:03:27.882043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.882064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 00:31:45.701 [2024-12-05 14:03:27.882230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.882252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 00:31:45.701 [2024-12-05 14:03:27.882356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.882386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 00:31:45.701 [2024-12-05 14:03:27.882544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.882567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 
00:31:45.701 [2024-12-05 14:03:27.882668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.882690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 00:31:45.701 [2024-12-05 14:03:27.882889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.882910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 00:31:45.701 [2024-12-05 14:03:27.883150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.883172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 00:31:45.701 [2024-12-05 14:03:27.883334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.883355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 00:31:45.701 [2024-12-05 14:03:27.883537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.883561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 
00:31:45.701 [2024-12-05 14:03:27.883644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.883665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 00:31:45.701 [2024-12-05 14:03:27.883815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.883837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 00:31:45.701 [2024-12-05 14:03:27.883934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.883955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 00:31:45.701 [2024-12-05 14:03:27.884127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.884148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 00:31:45.701 [2024-12-05 14:03:27.884306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.884328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 
00:31:45.701 [2024-12-05 14:03:27.884462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.884484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 00:31:45.701 [2024-12-05 14:03:27.884634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.884660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 00:31:45.701 [2024-12-05 14:03:27.884808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.884832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 00:31:45.701 [2024-12-05 14:03:27.884913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.884933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 00:31:45.701 [2024-12-05 14:03:27.885023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.885045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 
00:31:45.701 [2024-12-05 14:03:27.885129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.885151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 00:31:45.701 [2024-12-05 14:03:27.885308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.885330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 00:31:45.701 [2024-12-05 14:03:27.885495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.885519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 00:31:45.701 [2024-12-05 14:03:27.885621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.885644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 00:31:45.701 [2024-12-05 14:03:27.885807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.885828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 
00:31:45.701 [2024-12-05 14:03:27.885932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.885953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 00:31:45.701 [2024-12-05 14:03:27.886132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.886154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.701 qpair failed and we were unable to recover it. 00:31:45.701 [2024-12-05 14:03:27.886333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.701 [2024-12-05 14:03:27.886355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 00:31:45.702 [2024-12-05 14:03:27.886517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.886540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 00:31:45.702 [2024-12-05 14:03:27.886693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.886716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 
00:31:45.702 [2024-12-05 14:03:27.886873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.886894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 00:31:45.702 [2024-12-05 14:03:27.886998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.887020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 00:31:45.702 [2024-12-05 14:03:27.887111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.887132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 00:31:45.702 [2024-12-05 14:03:27.887222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.887242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 00:31:45.702 [2024-12-05 14:03:27.887395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.887419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 
00:31:45.702 [2024-12-05 14:03:27.887507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.887527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 00:31:45.702 [2024-12-05 14:03:27.887616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.887636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 00:31:45.702 [2024-12-05 14:03:27.887742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.887763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 00:31:45.702 [2024-12-05 14:03:27.887912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.887934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 00:31:45.702 [2024-12-05 14:03:27.888104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.888126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 
00:31:45.702 [2024-12-05 14:03:27.888288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.888310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 00:31:45.702 [2024-12-05 14:03:27.888392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.888415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 00:31:45.702 [2024-12-05 14:03:27.888509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.888530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 00:31:45.702 [2024-12-05 14:03:27.888612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.888636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 00:31:45.702 [2024-12-05 14:03:27.888789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.888810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 
00:31:45.702 [2024-12-05 14:03:27.888970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.888991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 00:31:45.702 [2024-12-05 14:03:27.889171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.889192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 00:31:45.702 [2024-12-05 14:03:27.889410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.889433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 00:31:45.702 [2024-12-05 14:03:27.889598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.889620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 00:31:45.702 [2024-12-05 14:03:27.889698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.889719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 
00:31:45.702 [2024-12-05 14:03:27.889892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.889913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 00:31:45.702 [2024-12-05 14:03:27.890081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.890103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 00:31:45.702 [2024-12-05 14:03:27.890213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.890236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 00:31:45.702 [2024-12-05 14:03:27.890504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.890526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 00:31:45.702 [2024-12-05 14:03:27.890623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.890645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 
00:31:45.702 [2024-12-05 14:03:27.890798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.890820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 00:31:45.702 [2024-12-05 14:03:27.890900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.890921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 00:31:45.702 [2024-12-05 14:03:27.891012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.891034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 00:31:45.702 [2024-12-05 14:03:27.891114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.891135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 00:31:45.702 [2024-12-05 14:03:27.891223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.702 [2024-12-05 14:03:27.891246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.702 qpair failed and we were unable to recover it. 
00:31:45.702 [2024-12-05 14:03:27.891449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.702 [2024-12-05 14:03:27.891472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.702 qpair failed and we were unable to recover it.
[The three log lines above repeat approximately 115 times with successive timestamps (14:03:27.891 through 14:03:27.910): every connect() attempt to 10.0.0.2 port 4420 fails with errno = 111 (ECONNREFUSED) and each qpair fails without recovery. All repeats reference tqpair=0xbe5be0, except two records at 14:03:27.902034 and 14:03:27.902321 (tqpair=0x7fdb60000b90) and two at 14:03:27.904247 and 14:03:27.904486 (tqpair=0x7fdb68000b90).]
00:31:45.706 [2024-12-05 14:03:27.910328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.910350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 00:31:45.706 [2024-12-05 14:03:27.910533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.910556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 00:31:45.706 [2024-12-05 14:03:27.910663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.910685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 00:31:45.706 [2024-12-05 14:03:27.910898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.910919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 00:31:45.706 [2024-12-05 14:03:27.911020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.911046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 
00:31:45.706 [2024-12-05 14:03:27.911195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.911217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 00:31:45.706 [2024-12-05 14:03:27.911310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.911332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 00:31:45.706 [2024-12-05 14:03:27.911508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.911531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 00:31:45.706 [2024-12-05 14:03:27.911711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.911732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 00:31:45.706 [2024-12-05 14:03:27.911947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.911969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 
00:31:45.706 [2024-12-05 14:03:27.912136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.912159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 00:31:45.706 [2024-12-05 14:03:27.912306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.912328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 00:31:45.706 [2024-12-05 14:03:27.912443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.912466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 00:31:45.706 [2024-12-05 14:03:27.912625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.912647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 00:31:45.706 [2024-12-05 14:03:27.912829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.912851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 
00:31:45.706 [2024-12-05 14:03:27.913066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.913088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 00:31:45.706 [2024-12-05 14:03:27.913241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.913263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 00:31:45.706 [2024-12-05 14:03:27.913364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.913410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 00:31:45.706 [2024-12-05 14:03:27.913527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.913567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 00:31:45.706 [2024-12-05 14:03:27.913767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.913803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 
00:31:45.706 [2024-12-05 14:03:27.913933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.913966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 00:31:45.706 [2024-12-05 14:03:27.914094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.914128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 00:31:45.706 [2024-12-05 14:03:27.914340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.914381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 00:31:45.706 [2024-12-05 14:03:27.914490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.914522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 00:31:45.706 [2024-12-05 14:03:27.914646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.914672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 
00:31:45.706 [2024-12-05 14:03:27.914829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.914852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 00:31:45.706 [2024-12-05 14:03:27.914953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.914975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 00:31:45.706 [2024-12-05 14:03:27.915143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.915164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 00:31:45.706 [2024-12-05 14:03:27.915244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.915267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 00:31:45.706 [2024-12-05 14:03:27.915433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.915454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 
00:31:45.706 [2024-12-05 14:03:27.915621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.915643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 00:31:45.706 [2024-12-05 14:03:27.915737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.915758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 00:31:45.706 [2024-12-05 14:03:27.915987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.916010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 00:31:45.706 [2024-12-05 14:03:27.916161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.916185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.706 qpair failed and we were unable to recover it. 00:31:45.706 [2024-12-05 14:03:27.916351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.706 [2024-12-05 14:03:27.916394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 
00:31:45.707 [2024-12-05 14:03:27.916558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.916580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.916742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.916765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.916925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.916947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.917048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.917071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.917224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.917245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 
00:31:45.707 [2024-12-05 14:03:27.917355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.917385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.917482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.917504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.917597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.917619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.917775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.917796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.917891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.917913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 
00:31:45.707 [2024-12-05 14:03:27.918142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.918167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.918268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.918290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.918380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.918402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.918550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.918571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.918678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.918700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 
00:31:45.707 [2024-12-05 14:03:27.918814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.918836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.918997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.919018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.919173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.919196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.919291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.919314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.919410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.919433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 
00:31:45.707 [2024-12-05 14:03:27.919592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.919614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.919705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.919726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.919877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.919899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.920053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.920076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.920233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.920255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 
00:31:45.707 [2024-12-05 14:03:27.920349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.920376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.920542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.920563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.920647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.920668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.920772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.920794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.920907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.920928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 
00:31:45.707 [2024-12-05 14:03:27.921080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.921102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.921260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.921282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.921479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.921502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.921610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.921634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.921828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.921850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 
00:31:45.707 [2024-12-05 14:03:27.921944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.921968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.922068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.922090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.922248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.707 [2024-12-05 14:03:27.922274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.707 qpair failed and we were unable to recover it. 00:31:45.707 [2024-12-05 14:03:27.922434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.708 [2024-12-05 14:03:27.922457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.708 qpair failed and we were unable to recover it. 00:31:45.708 [2024-12-05 14:03:27.922630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.708 [2024-12-05 14:03:27.922651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.708 qpair failed and we were unable to recover it. 
00:31:45.708 [2024-12-05 14:03:27.922815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.708 [2024-12-05 14:03:27.922836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.708 qpair failed and we were unable to recover it. 00:31:45.708 [2024-12-05 14:03:27.922952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.708 [2024-12-05 14:03:27.922974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.708 qpair failed and we were unable to recover it. 00:31:45.708 [2024-12-05 14:03:27.923063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.708 [2024-12-05 14:03:27.923085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.708 qpair failed and we were unable to recover it. 00:31:45.708 [2024-12-05 14:03:27.923232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.708 [2024-12-05 14:03:27.923304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:45.708 qpair failed and we were unable to recover it. 00:31:45.708 [2024-12-05 14:03:27.923440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.708 [2024-12-05 14:03:27.923479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.708 qpair failed and we were unable to recover it. 
00:31:45.711 [2024-12-05 14:03:27.941732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.941754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 00:31:45.711 [2024-12-05 14:03:27.941862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.941883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 00:31:45.711 [2024-12-05 14:03:27.942044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.942066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 00:31:45.711 [2024-12-05 14:03:27.942233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.942255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 00:31:45.711 [2024-12-05 14:03:27.942419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.942442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 
00:31:45.711 [2024-12-05 14:03:27.942618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.942639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 00:31:45.711 [2024-12-05 14:03:27.942855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.942877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 00:31:45.711 [2024-12-05 14:03:27.942961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.942984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 00:31:45.711 [2024-12-05 14:03:27.943134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.943157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 00:31:45.711 [2024-12-05 14:03:27.943321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.943342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 
00:31:45.711 [2024-12-05 14:03:27.943458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.943481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 00:31:45.711 [2024-12-05 14:03:27.943631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.943652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 00:31:45.711 [2024-12-05 14:03:27.943803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.943824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 00:31:45.711 [2024-12-05 14:03:27.943989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.944013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 00:31:45.711 [2024-12-05 14:03:27.944228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.944250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 
00:31:45.711 [2024-12-05 14:03:27.944348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.944376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 00:31:45.711 [2024-12-05 14:03:27.944487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.944509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 00:31:45.711 [2024-12-05 14:03:27.944605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.944627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 00:31:45.711 [2024-12-05 14:03:27.944714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.944740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 00:31:45.711 [2024-12-05 14:03:27.944980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.945002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 
00:31:45.711 [2024-12-05 14:03:27.945103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.945124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 00:31:45.711 [2024-12-05 14:03:27.945217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.945238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 00:31:45.711 [2024-12-05 14:03:27.945318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.945340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 00:31:45.711 [2024-12-05 14:03:27.945444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.945467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 00:31:45.711 [2024-12-05 14:03:27.945639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.945660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 
00:31:45.711 [2024-12-05 14:03:27.945743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.945765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 00:31:45.711 [2024-12-05 14:03:27.945926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.945946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 00:31:45.711 [2024-12-05 14:03:27.946141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.946163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 00:31:45.711 [2024-12-05 14:03:27.946358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.946389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 00:31:45.711 [2024-12-05 14:03:27.946505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.946528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 
00:31:45.711 [2024-12-05 14:03:27.946694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.711 [2024-12-05 14:03:27.946716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.711 qpair failed and we were unable to recover it. 00:31:45.711 [2024-12-05 14:03:27.946817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.946838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.946919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.946942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.947029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.947051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.947219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.947240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 
00:31:45.712 [2024-12-05 14:03:27.947404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.947427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.947526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.947550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.947710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.947731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.947818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.947837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.947923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.947946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 
00:31:45.712 [2024-12-05 14:03:27.948123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.948144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.948257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.948279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.948459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.948482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.948714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.948736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.948830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.948851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 
00:31:45.712 [2024-12-05 14:03:27.949001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.949022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.949260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.949282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.949525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.949547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.949638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.949659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.949771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.949792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 
00:31:45.712 [2024-12-05 14:03:27.949955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.949976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.950220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.950243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.950522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.950544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.950643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.950666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.950837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.950859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 
00:31:45.712 [2024-12-05 14:03:27.951023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.951046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.951266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.951287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.951436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.951460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.951561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.951584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.951807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.951833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 
00:31:45.712 [2024-12-05 14:03:27.951928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.951950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.952168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.952189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.952295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.952317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.952482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.952503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.952691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.952713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 
00:31:45.712 [2024-12-05 14:03:27.952866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.952887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.953055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.953077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.953239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.953260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.953351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.712 [2024-12-05 14:03:27.953377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.712 qpair failed and we were unable to recover it. 00:31:45.712 [2024-12-05 14:03:27.953541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.713 [2024-12-05 14:03:27.953563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.713 qpair failed and we were unable to recover it. 
00:31:45.713 [2024-12-05 14:03:27.953657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.713 [2024-12-05 14:03:27.953678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.713 qpair failed and we were unable to recover it. 00:31:45.713 [2024-12-05 14:03:27.953787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.713 [2024-12-05 14:03:27.953808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.713 qpair failed and we were unable to recover it. 00:31:45.713 [2024-12-05 14:03:27.953958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.713 [2024-12-05 14:03:27.953980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.713 qpair failed and we were unable to recover it. 00:31:45.713 [2024-12-05 14:03:27.954147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.713 [2024-12-05 14:03:27.954169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.713 qpair failed and we were unable to recover it. 00:31:45.713 [2024-12-05 14:03:27.954390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.713 [2024-12-05 14:03:27.954412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.713 qpair failed and we were unable to recover it. 
00:31:45.713 [2024-12-05 14:03:27.954570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.713 [2024-12-05 14:03:27.954592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.713 qpair failed and we were unable to recover it.
00:31:45.716 [... the same connect() failed (errno = 111) / sock connection error message pair for tqpair=0xbe5be0, addr=10.0.0.2, port=4420 repeats for every retry through 2024-12-05 14:03:27.973, each ending "qpair failed and we were unable to recover it."; repeats trimmed ...]
00:31:45.716 [2024-12-05 14:03:27.973560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.716 [2024-12-05 14:03:27.973584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.716 qpair failed and we were unable to recover it. 00:31:45.716 [2024-12-05 14:03:27.973820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.716 [2024-12-05 14:03:27.973842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.716 qpair failed and we were unable to recover it. 00:31:45.716 [2024-12-05 14:03:27.973935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.716 [2024-12-05 14:03:27.973956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.716 qpair failed and we were unable to recover it. 00:31:45.716 [2024-12-05 14:03:27.974171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.716 [2024-12-05 14:03:27.974193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.716 qpair failed and we were unable to recover it. 00:31:45.716 [2024-12-05 14:03:27.974294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.716 [2024-12-05 14:03:27.974316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.716 qpair failed and we were unable to recover it. 
00:31:45.716 [2024-12-05 14:03:27.974420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.716 [2024-12-05 14:03:27.974447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.716 qpair failed and we were unable to recover it. 00:31:45.716 [2024-12-05 14:03:27.974607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.716 [2024-12-05 14:03:27.974630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.716 qpair failed and we were unable to recover it. 00:31:45.716 [2024-12-05 14:03:27.974790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.716 [2024-12-05 14:03:27.974813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.716 qpair failed and we were unable to recover it. 00:31:45.716 [2024-12-05 14:03:27.974913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.716 [2024-12-05 14:03:27.974935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.716 qpair failed and we were unable to recover it. 00:31:45.716 [2024-12-05 14:03:27.975020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.716 [2024-12-05 14:03:27.975041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.716 qpair failed and we were unable to recover it. 
00:31:45.716 [2024-12-05 14:03:27.975283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.716 [2024-12-05 14:03:27.975305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.716 qpair failed and we were unable to recover it. 00:31:45.716 [2024-12-05 14:03:27.975571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.716 [2024-12-05 14:03:27.975593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.716 qpair failed and we were unable to recover it. 00:31:45.716 [2024-12-05 14:03:27.975710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.716 [2024-12-05 14:03:27.975732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.716 qpair failed and we were unable to recover it. 00:31:45.716 [2024-12-05 14:03:27.975836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.716 [2024-12-05 14:03:27.975858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.716 qpair failed and we were unable to recover it. 00:31:45.716 [2024-12-05 14:03:27.975955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.716 [2024-12-05 14:03:27.975977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.716 qpair failed and we were unable to recover it. 
00:31:45.716 [2024-12-05 14:03:27.976141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.716 [2024-12-05 14:03:27.976163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.716 qpair failed and we were unable to recover it. 00:31:45.716 [2024-12-05 14:03:27.976414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.716 [2024-12-05 14:03:27.976447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.716 qpair failed and we were unable to recover it. 00:31:45.716 [2024-12-05 14:03:27.976631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.716 [2024-12-05 14:03:27.976652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.716 qpair failed and we were unable to recover it. 00:31:45.716 [2024-12-05 14:03:27.976829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.716 [2024-12-05 14:03:27.976851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.716 qpair failed and we were unable to recover it. 00:31:45.716 [2024-12-05 14:03:27.977010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.716 [2024-12-05 14:03:27.977033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.716 qpair failed and we were unable to recover it. 
00:31:45.716 [2024-12-05 14:03:27.977111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.716 [2024-12-05 14:03:27.977132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.716 qpair failed and we were unable to recover it. 00:31:45.716 [2024-12-05 14:03:27.977241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.716 [2024-12-05 14:03:27.977263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.716 qpair failed and we were unable to recover it. 00:31:45.716 [2024-12-05 14:03:27.977449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.716 [2024-12-05 14:03:27.977472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.716 qpair failed and we were unable to recover it. 00:31:45.716 [2024-12-05 14:03:27.977571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.716 [2024-12-05 14:03:27.977592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.716 qpair failed and we were unable to recover it. 00:31:45.716 [2024-12-05 14:03:27.977762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.716 [2024-12-05 14:03:27.977783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.716 qpair failed and we were unable to recover it. 
00:31:45.716 [2024-12-05 14:03:27.977944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.716 [2024-12-05 14:03:27.977966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.716 qpair failed and we were unable to recover it. 00:31:45.716 [2024-12-05 14:03:27.978058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.716 [2024-12-05 14:03:27.978079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.716 qpair failed and we were unable to recover it. 00:31:45.716 [2024-12-05 14:03:27.978239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.716 [2024-12-05 14:03:27.978260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 00:31:45.717 [2024-12-05 14:03:27.978409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.978432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 00:31:45.717 [2024-12-05 14:03:27.978580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.978604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 
00:31:45.717 [2024-12-05 14:03:27.978760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.978782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 00:31:45.717 [2024-12-05 14:03:27.978944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.978967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 00:31:45.717 [2024-12-05 14:03:27.979137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.979159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 00:31:45.717 [2024-12-05 14:03:27.979323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.979345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 00:31:45.717 [2024-12-05 14:03:27.979582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.979604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 
00:31:45.717 [2024-12-05 14:03:27.979765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.979787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 00:31:45.717 [2024-12-05 14:03:27.979880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.979903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 00:31:45.717 [2024-12-05 14:03:27.980141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.980163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 00:31:45.717 [2024-12-05 14:03:27.980317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.980340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 00:31:45.717 [2024-12-05 14:03:27.980509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.980532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 
00:31:45.717 [2024-12-05 14:03:27.980631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.980652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 00:31:45.717 [2024-12-05 14:03:27.980819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.980841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 00:31:45.717 [2024-12-05 14:03:27.981001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.981024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 00:31:45.717 [2024-12-05 14:03:27.981172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.981193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 00:31:45.717 [2024-12-05 14:03:27.981294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.981316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 
00:31:45.717 [2024-12-05 14:03:27.981556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.981579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 00:31:45.717 [2024-12-05 14:03:27.981670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.981692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 00:31:45.717 [2024-12-05 14:03:27.981837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.981859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 00:31:45.717 [2024-12-05 14:03:27.981962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.981985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 00:31:45.717 [2024-12-05 14:03:27.982227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.982300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 
00:31:45.717 [2024-12-05 14:03:27.982587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.982625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 00:31:45.717 [2024-12-05 14:03:27.982826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.982861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 00:31:45.717 [2024-12-05 14:03:27.983024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.983049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 00:31:45.717 [2024-12-05 14:03:27.983232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.983254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 00:31:45.717 [2024-12-05 14:03:27.983349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.983376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 
00:31:45.717 [2024-12-05 14:03:27.983532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.983555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 00:31:45.717 [2024-12-05 14:03:27.983664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.983686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 00:31:45.717 [2024-12-05 14:03:27.983833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.983855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 00:31:45.717 [2024-12-05 14:03:27.984073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.984094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 00:31:45.717 [2024-12-05 14:03:27.984188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.984210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 
00:31:45.717 [2024-12-05 14:03:27.984441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.717 [2024-12-05 14:03:27.984465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.717 qpair failed and we were unable to recover it. 00:31:45.717 [2024-12-05 14:03:27.984686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.718 [2024-12-05 14:03:27.984709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.718 qpair failed and we were unable to recover it. 00:31:45.718 [2024-12-05 14:03:27.984874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.718 [2024-12-05 14:03:27.984895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.718 qpair failed and we were unable to recover it. 00:31:45.718 [2024-12-05 14:03:27.984992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.718 [2024-12-05 14:03:27.985014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.718 qpair failed and we were unable to recover it. 00:31:45.718 [2024-12-05 14:03:27.985167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.718 [2024-12-05 14:03:27.985189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.718 qpair failed and we were unable to recover it. 
00:31:45.718 [2024-12-05 14:03:27.985299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.718 [2024-12-05 14:03:27.985321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.718 qpair failed and we were unable to recover it. 00:31:45.718 [2024-12-05 14:03:27.985471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.718 [2024-12-05 14:03:27.985494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.718 qpair failed and we were unable to recover it. 00:31:45.718 [2024-12-05 14:03:27.985654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.718 [2024-12-05 14:03:27.985676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.718 qpair failed and we were unable to recover it. 00:31:45.718 [2024-12-05 14:03:27.985898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.718 [2024-12-05 14:03:27.985929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.718 qpair failed and we were unable to recover it. 00:31:45.718 [2024-12-05 14:03:27.986056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.718 [2024-12-05 14:03:27.986089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.718 qpair failed and we were unable to recover it. 
00:31:45.718 [2024-12-05 14:03:27.986279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.718 [2024-12-05 14:03:27.986312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.718 qpair failed and we were unable to recover it. 00:31:45.718 [2024-12-05 14:03:27.986443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.718 [2024-12-05 14:03:27.986477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.718 qpair failed and we were unable to recover it. 00:31:45.718 [2024-12-05 14:03:27.986648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.718 [2024-12-05 14:03:27.986680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.718 qpair failed and we were unable to recover it. 00:31:45.718 [2024-12-05 14:03:27.986802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.718 [2024-12-05 14:03:27.986840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.718 qpair failed and we were unable to recover it. 00:31:45.718 [2024-12-05 14:03:27.986949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.718 [2024-12-05 14:03:27.986982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.718 qpair failed and we were unable to recover it. 
00:31:45.718 [2024-12-05 14:03:27.987182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.718 [2024-12-05 14:03:27.987214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.718 qpair failed and we were unable to recover it.
00:31:45.718 [2024-12-05 14:03:27.987315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.718 [2024-12-05 14:03:27.987347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.718 qpair failed and we were unable to recover it.
00:31:45.718 [2024-12-05 14:03:27.987620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.718 [2024-12-05 14:03:27.987652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.718 qpair failed and we were unable to recover it.
00:31:45.718 [2024-12-05 14:03:27.987858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.718 [2024-12-05 14:03:27.987880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.718 qpair failed and we were unable to recover it.
00:31:45.718 [2024-12-05 14:03:27.987971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.718 [2024-12-05 14:03:27.987992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.718 qpair failed and we were unable to recover it.
00:31:45.718 [2024-12-05 14:03:27.988153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.718 [2024-12-05 14:03:27.988225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:45.718 qpair failed and we were unable to recover it.
00:31:45.718 [2024-12-05 14:03:27.988390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.718 [2024-12-05 14:03:27.988428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:45.718 qpair failed and we were unable to recover it.
00:31:45.718 [2024-12-05 14:03:27.988554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.718 [2024-12-05 14:03:27.988577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.718 qpair failed and we were unable to recover it.
00:31:45.718 [2024-12-05 14:03:27.988728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.718 [2024-12-05 14:03:27.988749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.718 qpair failed and we were unable to recover it.
00:31:45.718 [2024-12-05 14:03:27.988853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.718 [2024-12-05 14:03:27.988875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.718 qpair failed and we were unable to recover it.
00:31:45.718 [2024-12-05 14:03:27.988962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.718 [2024-12-05 14:03:27.988982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.718 qpair failed and we were unable to recover it.
00:31:45.718 [2024-12-05 14:03:27.989080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.718 [2024-12-05 14:03:27.989101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.718 qpair failed and we were unable to recover it.
00:31:45.718 [2024-12-05 14:03:27.989358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.718 [2024-12-05 14:03:27.989386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.718 qpair failed and we were unable to recover it.
00:31:45.718 [2024-12-05 14:03:27.989543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.718 [2024-12-05 14:03:27.989565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.718 qpair failed and we were unable to recover it.
00:31:45.718 [2024-12-05 14:03:27.989720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.718 [2024-12-05 14:03:27.989742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.718 qpair failed and we were unable to recover it.
00:31:45.718 [2024-12-05 14:03:27.989943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.718 [2024-12-05 14:03:27.989965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.718 qpair failed and we were unable to recover it.
00:31:45.718 [2024-12-05 14:03:27.990062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.718 [2024-12-05 14:03:27.990085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.718 qpair failed and we were unable to recover it.
00:31:45.718 [2024-12-05 14:03:27.990234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.718 [2024-12-05 14:03:27.990256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.718 qpair failed and we were unable to recover it.
00:31:45.718 [2024-12-05 14:03:27.990409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.718 [2024-12-05 14:03:27.990433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.718 qpair failed and we were unable to recover it.
00:31:45.718 [2024-12-05 14:03:27.990593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.718 [2024-12-05 14:03:27.990614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.718 qpair failed and we were unable to recover it.
00:31:45.718 [2024-12-05 14:03:27.990793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.718 [2024-12-05 14:03:27.990815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.718 qpair failed and we were unable to recover it.
00:31:45.718 [2024-12-05 14:03:27.990905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.718 [2024-12-05 14:03:27.990926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.718 qpair failed and we were unable to recover it.
00:31:45.718 [2024-12-05 14:03:27.991110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.718 [2024-12-05 14:03:27.991134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.718 qpair failed and we were unable to recover it.
00:31:45.718 [2024-12-05 14:03:27.991238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.718 [2024-12-05 14:03:27.991260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.718 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.991429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.991452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.991652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.991679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.991777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.991798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.991905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.991927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.992025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.992049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.992127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.992148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.992307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.992328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.992509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.992535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.992687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.992709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.992816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.992837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.993002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.993025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.993191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.993214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.993300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.993322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.993420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.993442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.993531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.993552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.993770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.993841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.994063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.994099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.994286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.994319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.994552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.994588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.994705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.994736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.994950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.994982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.995187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.995212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.995311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.995332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.995584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.995607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.995707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.995728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.995878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.995900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.996054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.996077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.996242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.996264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.996421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.996444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.996534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.996555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.996652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.996673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.996754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.996775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.996940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.996963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.997050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.997071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.997223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.997244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.997398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.997421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.997598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.997622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.997729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.719 [2024-12-05 14:03:27.997750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.719 qpair failed and we were unable to recover it.
00:31:45.719 [2024-12-05 14:03:27.997988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:27.998011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:27.998171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:27.998192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:27.998375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:27.998398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:27.998576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:27.998599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:27.998767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:27.998794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:27.998894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:27.998917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:27.998999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:27.999019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:27.999104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:27.999124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:27.999223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:27.999244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:27.999328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:27.999349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:27.999441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:27.999462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:27.999627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:27.999648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:27.999742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:27.999763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:27.999982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:28.000003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:28.000173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:28.000195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:28.000297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:28.000319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:28.000490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:28.000513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:28.000737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:28.000759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:28.000870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:28.000893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:28.000978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:28.000999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:28.001082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:28.001103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:28.001324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:28.001345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:28.001448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:28.001470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:28.001630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:28.001653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:28.001811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:28.001834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:28.001914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:28.001935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:28.002043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:28.002066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:28.002163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:28.002186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:28.002302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:28.002324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:28.002511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:28.002535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:28.002634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:28.002656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:28.002810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:28.002836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:28.002988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:28.003010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:28.003104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:28.003129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:28.003301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:28.003323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:28.003413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:28.003436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:28.003600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:28.003623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:28.003712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:28.003732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.720 [2024-12-05 14:03:28.003840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.720 [2024-12-05 14:03:28.003862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.720 qpair failed and we were unable to recover it.
00:31:45.721 [2024-12-05 14:03:28.004018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.721 [2024-12-05 14:03:28.004040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.721 qpair failed and we were unable to recover it.
00:31:45.721 [2024-12-05 14:03:28.004139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.721 [2024-12-05 14:03:28.004161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.721 qpair failed and we were unable to recover it.
00:31:45.721 [2024-12-05 14:03:28.004341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.721 [2024-12-05 14:03:28.004363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.721 qpair failed and we were unable to recover it.
00:31:45.721 [2024-12-05 14:03:28.004526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.721 [2024-12-05 14:03:28.004548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.721 qpair failed and we were unable to recover it.
00:31:45.721 [2024-12-05 14:03:28.004634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.721 [2024-12-05 14:03:28.004656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.721 qpair failed and we were unable to recover it.
00:31:45.721 [2024-12-05 14:03:28.004747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.721 [2024-12-05 14:03:28.004768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.721 qpair failed and we were unable to recover it.
00:31:45.721 [2024-12-05 14:03:28.004923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.721 [2024-12-05 14:03:28.004995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:45.721 qpair failed and we were unable to recover it.
00:31:45.721 [2024-12-05 14:03:28.005222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.721 [2024-12-05 14:03:28.005258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.721 qpair failed and we were unable to recover it.
00:31:45.721 [2024-12-05 14:03:28.005394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.721 [2024-12-05 14:03:28.005429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.721 qpair failed and we were unable to recover it.
00:31:45.721 [2024-12-05 14:03:28.005564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.721 [2024-12-05 14:03:28.005597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.721 qpair failed and we were unable to recover it.
00:31:45.721 [2024-12-05 14:03:28.005701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.721 [2024-12-05 14:03:28.005732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.721 qpair failed and we were unable to recover it.
00:31:45.721 [2024-12-05 14:03:28.005920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.721 [2024-12-05 14:03:28.005953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.721 qpair failed and we were unable to recover it.
00:31:45.721 [2024-12-05 14:03:28.006130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.721 [2024-12-05 14:03:28.006155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.721 qpair failed and we were unable to recover it.
00:31:45.721 [2024-12-05 14:03:28.006310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.721 [2024-12-05 14:03:28.006332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.721 qpair failed and we were unable to recover it.
00:31:45.721 [2024-12-05 14:03:28.006424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.721 [2024-12-05 14:03:28.006446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.721 qpair failed and we were unable to recover it. 00:31:45.721 [2024-12-05 14:03:28.006611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.721 [2024-12-05 14:03:28.006632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.721 qpair failed and we were unable to recover it. 00:31:45.721 [2024-12-05 14:03:28.006716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.721 [2024-12-05 14:03:28.006737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.721 qpair failed and we were unable to recover it. 00:31:45.721 [2024-12-05 14:03:28.006954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.721 [2024-12-05 14:03:28.006976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.721 qpair failed and we were unable to recover it. 00:31:45.721 [2024-12-05 14:03:28.007126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.721 [2024-12-05 14:03:28.007149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.721 qpair failed and we were unable to recover it. 
00:31:45.721 [2024-12-05 14:03:28.007313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.721 [2024-12-05 14:03:28.007335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.721 qpair failed and we were unable to recover it. 00:31:45.721 [2024-12-05 14:03:28.007547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.721 [2024-12-05 14:03:28.007571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.721 qpair failed and we were unable to recover it. 00:31:45.721 [2024-12-05 14:03:28.007853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.721 [2024-12-05 14:03:28.007875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.721 qpair failed and we were unable to recover it. 00:31:45.721 [2024-12-05 14:03:28.008111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.721 [2024-12-05 14:03:28.008134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.721 qpair failed and we were unable to recover it. 00:31:45.721 [2024-12-05 14:03:28.008291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.721 [2024-12-05 14:03:28.008313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.721 qpair failed and we were unable to recover it. 
00:31:45.721 [2024-12-05 14:03:28.008483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.721 [2024-12-05 14:03:28.008507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.721 qpair failed and we were unable to recover it. 00:31:45.721 [2024-12-05 14:03:28.008676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.721 [2024-12-05 14:03:28.008698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.721 qpair failed and we were unable to recover it. 00:31:45.721 [2024-12-05 14:03:28.008807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.721 [2024-12-05 14:03:28.008828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.721 qpair failed and we were unable to recover it. 00:31:45.721 [2024-12-05 14:03:28.008991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.721 [2024-12-05 14:03:28.009013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.721 qpair failed and we were unable to recover it. 00:31:45.721 [2024-12-05 14:03:28.009192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.721 [2024-12-05 14:03:28.009215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.721 qpair failed and we were unable to recover it. 
00:31:45.721 [2024-12-05 14:03:28.009316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.721 [2024-12-05 14:03:28.009338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.721 qpair failed and we were unable to recover it. 00:31:45.721 [2024-12-05 14:03:28.009439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.721 [2024-12-05 14:03:28.009462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.721 qpair failed and we were unable to recover it. 00:31:45.721 [2024-12-05 14:03:28.009612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.721 [2024-12-05 14:03:28.009635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.721 qpair failed and we were unable to recover it. 00:31:45.721 [2024-12-05 14:03:28.009734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.721 [2024-12-05 14:03:28.009755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.721 qpair failed and we were unable to recover it. 00:31:45.721 [2024-12-05 14:03:28.010011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.721 [2024-12-05 14:03:28.010046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:45.721 qpair failed and we were unable to recover it. 
00:31:45.721 [2024-12-05 14:03:28.010171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.721 [2024-12-05 14:03:28.010212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.721 qpair failed and we were unable to recover it. 00:31:45.721 [2024-12-05 14:03:28.010340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.721 [2024-12-05 14:03:28.010383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.721 qpair failed and we were unable to recover it. 00:31:45.721 [2024-12-05 14:03:28.010496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.721 [2024-12-05 14:03:28.010520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.721 qpair failed and we were unable to recover it. 00:31:45.721 [2024-12-05 14:03:28.010684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.721 [2024-12-05 14:03:28.010706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.721 qpair failed and we were unable to recover it. 00:31:45.721 [2024-12-05 14:03:28.010860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.010881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 
00:31:45.722 [2024-12-05 14:03:28.010981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.011002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 00:31:45.722 [2024-12-05 14:03:28.011153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.011176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 00:31:45.722 [2024-12-05 14:03:28.011278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.011302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 00:31:45.722 [2024-12-05 14:03:28.011460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.011482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 00:31:45.722 [2024-12-05 14:03:28.011566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.011588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 
00:31:45.722 [2024-12-05 14:03:28.011741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.011763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 00:31:45.722 [2024-12-05 14:03:28.011851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.011871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 00:31:45.722 [2024-12-05 14:03:28.012035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.012057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 00:31:45.722 [2024-12-05 14:03:28.012229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.012251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 00:31:45.722 [2024-12-05 14:03:28.012421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.012444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 
00:31:45.722 [2024-12-05 14:03:28.012544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.012566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 00:31:45.722 [2024-12-05 14:03:28.012718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.012740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 00:31:45.722 [2024-12-05 14:03:28.012830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.012852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 00:31:45.722 [2024-12-05 14:03:28.013016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.013038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 00:31:45.722 [2024-12-05 14:03:28.013122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.013143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 
00:31:45.722 [2024-12-05 14:03:28.013303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.013325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 00:31:45.722 [2024-12-05 14:03:28.013499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.013522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 00:31:45.722 [2024-12-05 14:03:28.013674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.013696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 00:31:45.722 [2024-12-05 14:03:28.013797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.013819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 00:31:45.722 [2024-12-05 14:03:28.013907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.013930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 
00:31:45.722 [2024-12-05 14:03:28.014035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.014058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 00:31:45.722 [2024-12-05 14:03:28.014152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.014177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 00:31:45.722 [2024-12-05 14:03:28.014335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.014357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 00:31:45.722 [2024-12-05 14:03:28.014586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.014608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 00:31:45.722 [2024-12-05 14:03:28.014707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.014730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 
00:31:45.722 [2024-12-05 14:03:28.014845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.014866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 00:31:45.722 [2024-12-05 14:03:28.014960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.014982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 00:31:45.722 [2024-12-05 14:03:28.015134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.015155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 00:31:45.722 [2024-12-05 14:03:28.015236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.015256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 00:31:45.722 [2024-12-05 14:03:28.015509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.015534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 
00:31:45.722 [2024-12-05 14:03:28.015626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.015650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 00:31:45.722 [2024-12-05 14:03:28.015797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.015818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 00:31:45.722 [2024-12-05 14:03:28.015912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.722 [2024-12-05 14:03:28.015935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.722 qpair failed and we were unable to recover it. 00:31:45.723 [2024-12-05 14:03:28.016108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.723 [2024-12-05 14:03:28.016131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.723 qpair failed and we were unable to recover it. 00:31:45.723 [2024-12-05 14:03:28.016379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.723 [2024-12-05 14:03:28.016401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.723 qpair failed and we were unable to recover it. 
00:31:45.723 [2024-12-05 14:03:28.016510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.723 [2024-12-05 14:03:28.016533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.723 qpair failed and we were unable to recover it. 00:31:45.723 [2024-12-05 14:03:28.016753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.723 [2024-12-05 14:03:28.016774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.723 qpair failed and we were unable to recover it. 00:31:45.723 [2024-12-05 14:03:28.016891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.723 [2024-12-05 14:03:28.016913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.723 qpair failed and we were unable to recover it. 00:31:45.723 [2024-12-05 14:03:28.017122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.723 [2024-12-05 14:03:28.017144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.723 qpair failed and we were unable to recover it. 00:31:45.723 [2024-12-05 14:03:28.017253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.723 [2024-12-05 14:03:28.017275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.723 qpair failed and we were unable to recover it. 
00:31:45.723 [2024-12-05 14:03:28.017388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.723 [2024-12-05 14:03:28.017412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.723 qpair failed and we were unable to recover it. 00:31:45.723 [2024-12-05 14:03:28.017649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.723 [2024-12-05 14:03:28.017671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.723 qpair failed and we were unable to recover it. 00:31:45.723 [2024-12-05 14:03:28.017844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.723 [2024-12-05 14:03:28.017866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.723 qpair failed and we were unable to recover it. 00:31:45.723 [2024-12-05 14:03:28.017982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.723 [2024-12-05 14:03:28.018003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.723 qpair failed and we were unable to recover it. 00:31:45.723 [2024-12-05 14:03:28.018093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.723 [2024-12-05 14:03:28.018113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.723 qpair failed and we were unable to recover it. 
00:31:45.723 [2024-12-05 14:03:28.018329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.723 [2024-12-05 14:03:28.018351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.723 qpair failed and we were unable to recover it. 00:31:45.723 [2024-12-05 14:03:28.018593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.723 [2024-12-05 14:03:28.018616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.723 qpair failed and we were unable to recover it. 00:31:45.723 [2024-12-05 14:03:28.018714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.723 [2024-12-05 14:03:28.018738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.723 qpair failed and we were unable to recover it. 00:31:45.723 [2024-12-05 14:03:28.018915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.723 [2024-12-05 14:03:28.018944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.723 qpair failed and we were unable to recover it. 00:31:45.723 [2024-12-05 14:03:28.019187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.723 [2024-12-05 14:03:28.019209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.723 qpair failed and we were unable to recover it. 
00:31:45.723 [2024-12-05 14:03:28.019321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.723 [2024-12-05 14:03:28.019344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.723 qpair failed and we were unable to recover it. 00:31:45.723 [2024-12-05 14:03:28.019450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.723 [2024-12-05 14:03:28.019473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.723 qpair failed and we were unable to recover it. 00:31:45.723 [2024-12-05 14:03:28.019625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.723 [2024-12-05 14:03:28.019647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.723 qpair failed and we were unable to recover it. 00:31:45.723 [2024-12-05 14:03:28.019734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.723 [2024-12-05 14:03:28.019755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.723 qpair failed and we were unable to recover it. 00:31:45.723 [2024-12-05 14:03:28.019857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.723 [2024-12-05 14:03:28.019878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.723 qpair failed and we were unable to recover it. 
00:31:45.726 [2024-12-05 14:03:28.037979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.726 [2024-12-05 14:03:28.038002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.726 qpair failed and we were unable to recover it. 00:31:45.726 [2024-12-05 14:03:28.038110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.726 [2024-12-05 14:03:28.038133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.726 qpair failed and we were unable to recover it. 00:31:45.726 [2024-12-05 14:03:28.038238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.726 [2024-12-05 14:03:28.038261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.726 qpair failed and we were unable to recover it. 00:31:45.726 [2024-12-05 14:03:28.038455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.726 [2024-12-05 14:03:28.038478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.726 qpair failed and we were unable to recover it. 00:31:45.726 [2024-12-05 14:03:28.038632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.726 [2024-12-05 14:03:28.038654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.726 qpair failed and we were unable to recover it. 
00:31:45.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 827454 Killed "${NVMF_APP[@]}" "$@"
00:31:45.726 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:31:45.726 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:31:45.726 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:45.727 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:45.727 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:45.727 [... while the target application is down, connect() to addr=10.0.0.2, port=4420 keeps failing with errno = 111: the posix_sock_create / nvme_tcp_qpair_connect_sock error pair for tqpair=0xbe5be0 repeats from 14:03:28.038733 through 14:03:28.044695, each attempt ending "qpair failed and we were unable to recover it." ...]
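errno = 111 in the spam above is ECONNREFUSED on Linux: nothing is listening on 10.0.0.2:4420 while the target application is down, so every connect() is actively refused and the host keeps retrying. A minimal sketch of the failure mode using plain Python sockets (hypothetical helper names, not SPDK code) against a local port with no listener:

```python
import errno
import socket

def try_connect(host: str, port: int) -> int:
    """Attempt a TCP connect; return 0 on success or the errno on failure."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port))

# Grab a port the kernel considers free, then close it so nothing listens there.
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
free_port = probe.getsockname()[1]
probe.close()

rc = try_connect("127.0.0.1", free_port)
# On Linux, rc is errno.ECONNREFUSED (numeric value 111), matching the log.
```

The host-side driver treats this errno as transient, which is why the same pair of messages repeats until the target comes back and re-creates the listener.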
00:31:45.727 [2024-12-05 14:03:28.044958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.727 [2024-12-05 14:03:28.045030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.727 qpair failed and we were unable to recover it.
00:31:45.727 [... the error pair repeats for tqpair=0x7fdb60000b90 through 14:03:28.045953, then again for tqpair=0xbe5be0 from 14:03:28.046098 through 14:03:28.047445, each attempt ending "qpair failed and we were unable to recover it." ...]
00:31:45.728 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=828186
00:31:45.728 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 828186
00:31:45.728 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:31:45.728 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 828186 ']'
00:31:45.728 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:45.728 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:45.728 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:45.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:45.728 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:45.728 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:45.728 [... connect() to addr=10.0.0.2, port=4420 continues to fail with errno = 111: the posix_sock_create / nvme_tcp_qpair_connect_sock error pair for tqpair=0xbe5be0 repeats from 14:03:28.047624 through 14:03:28.052326, each attempt ending "qpair failed and we were unable to recover it." ...]
00:31:45.729 [2024-12-05 14:03:28.052447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.052470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 00:31:45.729 [2024-12-05 14:03:28.052706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.052728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 00:31:45.729 [2024-12-05 14:03:28.052894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.052918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 00:31:45.729 [2024-12-05 14:03:28.053030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.053052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 00:31:45.729 [2024-12-05 14:03:28.053143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.053167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 
00:31:45.729 [2024-12-05 14:03:28.053384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.053406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 00:31:45.729 [2024-12-05 14:03:28.053627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.053650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 00:31:45.729 [2024-12-05 14:03:28.053755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.053777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 00:31:45.729 [2024-12-05 14:03:28.053929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.053951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 00:31:45.729 [2024-12-05 14:03:28.054114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.054139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 
00:31:45.729 [2024-12-05 14:03:28.054224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.054246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 00:31:45.729 [2024-12-05 14:03:28.054357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.054384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 00:31:45.729 [2024-12-05 14:03:28.054480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.054502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 00:31:45.729 [2024-12-05 14:03:28.054589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.054611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 00:31:45.729 [2024-12-05 14:03:28.054714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.054736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 
00:31:45.729 [2024-12-05 14:03:28.054881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.054904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 00:31:45.729 [2024-12-05 14:03:28.055012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.055049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 00:31:45.729 [2024-12-05 14:03:28.055237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.055270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 00:31:45.729 [2024-12-05 14:03:28.055451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.055485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 00:31:45.729 [2024-12-05 14:03:28.055656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.055681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 
00:31:45.729 [2024-12-05 14:03:28.055799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.055822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 00:31:45.729 [2024-12-05 14:03:28.055926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.055950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 00:31:45.729 [2024-12-05 14:03:28.056123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.056146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 00:31:45.729 [2024-12-05 14:03:28.056235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.056258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 00:31:45.729 [2024-12-05 14:03:28.056404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.056428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 
00:31:45.729 [2024-12-05 14:03:28.056507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.056530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 00:31:45.729 [2024-12-05 14:03:28.056617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.056640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 00:31:45.729 [2024-12-05 14:03:28.056741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.056763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 00:31:45.729 [2024-12-05 14:03:28.056853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.056875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 00:31:45.729 [2024-12-05 14:03:28.057025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.057047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 
00:31:45.729 [2024-12-05 14:03:28.057140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.057162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 00:31:45.729 [2024-12-05 14:03:28.057271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.057295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 00:31:45.729 [2024-12-05 14:03:28.057460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.057484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 00:31:45.729 [2024-12-05 14:03:28.057643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.057665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.729 qpair failed and we were unable to recover it. 00:31:45.729 [2024-12-05 14:03:28.057840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.729 [2024-12-05 14:03:28.057864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 
00:31:45.730 [2024-12-05 14:03:28.057951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.057973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 00:31:45.730 [2024-12-05 14:03:28.058087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.058108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 00:31:45.730 [2024-12-05 14:03:28.058286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.058309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 00:31:45.730 [2024-12-05 14:03:28.058399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.058422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 00:31:45.730 [2024-12-05 14:03:28.058585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.058609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 
00:31:45.730 [2024-12-05 14:03:28.058758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.058780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 00:31:45.730 [2024-12-05 14:03:28.058891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.058914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 00:31:45.730 [2024-12-05 14:03:28.059072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.059095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 00:31:45.730 [2024-12-05 14:03:28.059175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.059201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 00:31:45.730 [2024-12-05 14:03:28.059314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.059336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 
00:31:45.730 [2024-12-05 14:03:28.059493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.059516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 00:31:45.730 [2024-12-05 14:03:28.059670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.059693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 00:31:45.730 [2024-12-05 14:03:28.059791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.059812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 00:31:45.730 [2024-12-05 14:03:28.059961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.059982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 00:31:45.730 [2024-12-05 14:03:28.060089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.060111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 
00:31:45.730 [2024-12-05 14:03:28.060206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.060228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 00:31:45.730 [2024-12-05 14:03:28.060390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.060415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 00:31:45.730 [2024-12-05 14:03:28.060509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.060534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 00:31:45.730 [2024-12-05 14:03:28.060617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.060638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 00:31:45.730 [2024-12-05 14:03:28.060803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.060825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 
00:31:45.730 [2024-12-05 14:03:28.060914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.060937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 00:31:45.730 [2024-12-05 14:03:28.061051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.061073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 00:31:45.730 [2024-12-05 14:03:28.061324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.061346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 00:31:45.730 [2024-12-05 14:03:28.061502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.061524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 00:31:45.730 [2024-12-05 14:03:28.061634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.061656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 
00:31:45.730 [2024-12-05 14:03:28.061810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.061832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 00:31:45.730 [2024-12-05 14:03:28.061915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.061937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 00:31:45.730 [2024-12-05 14:03:28.062042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.062066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 00:31:45.730 [2024-12-05 14:03:28.062232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.062255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 00:31:45.730 [2024-12-05 14:03:28.062348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.062375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 
00:31:45.730 [2024-12-05 14:03:28.062473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.062496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 00:31:45.730 [2024-12-05 14:03:28.062608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.062630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 00:31:45.730 [2024-12-05 14:03:28.062871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.062894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 00:31:45.730 [2024-12-05 14:03:28.062975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.062997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 00:31:45.730 [2024-12-05 14:03:28.063089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.063112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 
00:31:45.730 [2024-12-05 14:03:28.063198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.063219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 00:31:45.730 [2024-12-05 14:03:28.063381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.730 [2024-12-05 14:03:28.063404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.730 qpair failed and we were unable to recover it. 00:31:45.730 [2024-12-05 14:03:28.063500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.731 [2024-12-05 14:03:28.063524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.731 qpair failed and we were unable to recover it. 00:31:45.731 [2024-12-05 14:03:28.063625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.731 [2024-12-05 14:03:28.063647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.731 qpair failed and we were unable to recover it. 00:31:45.731 [2024-12-05 14:03:28.063876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.731 [2024-12-05 14:03:28.063899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.731 qpair failed and we were unable to recover it. 
00:31:45.731 [2024-12-05 14:03:28.063993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.731 [2024-12-05 14:03:28.064015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.731 qpair failed and we were unable to recover it. 00:31:45.731 [2024-12-05 14:03:28.064101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.731 [2024-12-05 14:03:28.064123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.731 qpair failed and we were unable to recover it. 00:31:45.731 [2024-12-05 14:03:28.064342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.731 [2024-12-05 14:03:28.064365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.731 qpair failed and we were unable to recover it. 00:31:45.731 [2024-12-05 14:03:28.064600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.731 [2024-12-05 14:03:28.064623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.731 qpair failed and we were unable to recover it. 00:31:45.731 [2024-12-05 14:03:28.064747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.731 [2024-12-05 14:03:28.064769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.731 qpair failed and we were unable to recover it. 
00:31:45.734 [2024-12-05 14:03:28.081023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.734 [2024-12-05 14:03:28.081046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.734 qpair failed and we were unable to recover it. 00:31:45.734 [2024-12-05 14:03:28.081261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.734 [2024-12-05 14:03:28.081284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.734 qpair failed and we were unable to recover it. 00:31:45.734 [2024-12-05 14:03:28.081436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.734 [2024-12-05 14:03:28.081459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.734 qpair failed and we were unable to recover it. 00:31:45.734 [2024-12-05 14:03:28.081641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.734 [2024-12-05 14:03:28.081664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.734 qpair failed and we were unable to recover it. 00:31:45.734 [2024-12-05 14:03:28.081841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.734 [2024-12-05 14:03:28.081863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.734 qpair failed and we were unable to recover it. 
00:31:45.734 [2024-12-05 14:03:28.081955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.734 [2024-12-05 14:03:28.081977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.734 qpair failed and we were unable to recover it. 00:31:45.734 [2024-12-05 14:03:28.082061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.734 [2024-12-05 14:03:28.082084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.734 qpair failed and we were unable to recover it. 00:31:45.734 [2024-12-05 14:03:28.082193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.734 [2024-12-05 14:03:28.082217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.734 qpair failed and we were unable to recover it. 00:31:45.734 [2024-12-05 14:03:28.082346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.734 [2024-12-05 14:03:28.082374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.734 qpair failed and we were unable to recover it. 00:31:45.734 [2024-12-05 14:03:28.082548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.734 [2024-12-05 14:03:28.082571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.734 qpair failed and we were unable to recover it. 
00:31:45.734 [2024-12-05 14:03:28.082665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.734 [2024-12-05 14:03:28.082686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.734 qpair failed and we were unable to recover it. 00:31:45.734 [2024-12-05 14:03:28.082847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.734 [2024-12-05 14:03:28.082870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.734 qpair failed and we were unable to recover it. 00:31:45.734 [2024-12-05 14:03:28.083025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.734 [2024-12-05 14:03:28.083048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.734 qpair failed and we were unable to recover it. 00:31:45.734 [2024-12-05 14:03:28.083151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.734 [2024-12-05 14:03:28.083175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.734 qpair failed and we were unable to recover it. 00:31:45.734 [2024-12-05 14:03:28.083329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.734 [2024-12-05 14:03:28.083351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.734 qpair failed and we were unable to recover it. 
00:31:45.734 [2024-12-05 14:03:28.083517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.734 [2024-12-05 14:03:28.083539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.734 qpair failed and we were unable to recover it. 00:31:45.734 [2024-12-05 14:03:28.083722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.734 [2024-12-05 14:03:28.083743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.734 qpair failed and we were unable to recover it. 00:31:45.734 [2024-12-05 14:03:28.083929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.734 [2024-12-05 14:03:28.083952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.734 qpair failed and we were unable to recover it. 00:31:45.734 [2024-12-05 14:03:28.084119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.734 [2024-12-05 14:03:28.084141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.734 qpair failed and we were unable to recover it. 00:31:45.734 [2024-12-05 14:03:28.084381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.734 [2024-12-05 14:03:28.084404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.734 qpair failed and we were unable to recover it. 
00:31:45.734 [2024-12-05 14:03:28.084509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.084531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 00:31:45.735 [2024-12-05 14:03:28.084650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.084673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 00:31:45.735 [2024-12-05 14:03:28.084840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.084862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 00:31:45.735 [2024-12-05 14:03:28.085014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.085037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 00:31:45.735 [2024-12-05 14:03:28.085138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.085160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 
00:31:45.735 [2024-12-05 14:03:28.085321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.085343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 00:31:45.735 [2024-12-05 14:03:28.085451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.085474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 00:31:45.735 [2024-12-05 14:03:28.085564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.085586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 00:31:45.735 [2024-12-05 14:03:28.085805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.085828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 00:31:45.735 [2024-12-05 14:03:28.085914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.085936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 
00:31:45.735 [2024-12-05 14:03:28.086022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.086048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 00:31:45.735 [2024-12-05 14:03:28.086155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.086177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 00:31:45.735 [2024-12-05 14:03:28.086360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.086391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 00:31:45.735 [2024-12-05 14:03:28.086475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.086497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 00:31:45.735 [2024-12-05 14:03:28.086665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.086687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 
00:31:45.735 [2024-12-05 14:03:28.086836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.086859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 00:31:45.735 [2024-12-05 14:03:28.087025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.087049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 00:31:45.735 [2024-12-05 14:03:28.087287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.087309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 00:31:45.735 [2024-12-05 14:03:28.087486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.087510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 00:31:45.735 [2024-12-05 14:03:28.087598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.087620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 
00:31:45.735 [2024-12-05 14:03:28.087709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.087731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 00:31:45.735 [2024-12-05 14:03:28.087828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.087850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 00:31:45.735 [2024-12-05 14:03:28.088075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.088097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 00:31:45.735 [2024-12-05 14:03:28.088185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.088206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 00:31:45.735 [2024-12-05 14:03:28.088313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.088335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 
00:31:45.735 [2024-12-05 14:03:28.088517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.088541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 00:31:45.735 [2024-12-05 14:03:28.088635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.088658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 00:31:45.735 [2024-12-05 14:03:28.088737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.088760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 00:31:45.735 [2024-12-05 14:03:28.088933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.088956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 00:31:45.735 [2024-12-05 14:03:28.089053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.089076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 
00:31:45.735 [2024-12-05 14:03:28.089222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.089244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 00:31:45.735 [2024-12-05 14:03:28.089324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.089347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 00:31:45.735 [2024-12-05 14:03:28.089535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.089558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 00:31:45.735 [2024-12-05 14:03:28.089739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.089760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 00:31:45.735 [2024-12-05 14:03:28.089846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.089869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 
00:31:45.735 [2024-12-05 14:03:28.089958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.089979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 00:31:45.735 [2024-12-05 14:03:28.090124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.735 [2024-12-05 14:03:28.090146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.735 qpair failed and we were unable to recover it. 00:31:45.736 [2024-12-05 14:03:28.090242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.736 [2024-12-05 14:03:28.090268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.736 qpair failed and we were unable to recover it. 00:31:45.736 [2024-12-05 14:03:28.090377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.736 [2024-12-05 14:03:28.090400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.736 qpair failed and we were unable to recover it. 00:31:45.736 [2024-12-05 14:03:28.090565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.736 [2024-12-05 14:03:28.090586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.736 qpair failed and we were unable to recover it. 
00:31:45.736 [2024-12-05 14:03:28.090684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.736 [2024-12-05 14:03:28.090706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.736 qpair failed and we were unable to recover it. 00:31:45.736 [2024-12-05 14:03:28.090805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.736 [2024-12-05 14:03:28.090829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.736 qpair failed and we were unable to recover it. 00:31:45.736 [2024-12-05 14:03:28.090986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.736 [2024-12-05 14:03:28.091007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.736 qpair failed and we were unable to recover it. 00:31:45.736 [2024-12-05 14:03:28.091157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.736 [2024-12-05 14:03:28.091179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.736 qpair failed and we were unable to recover it. 00:31:45.736 [2024-12-05 14:03:28.091331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.736 [2024-12-05 14:03:28.091354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.736 qpair failed and we were unable to recover it. 
00:31:45.736 [2024-12-05 14:03:28.091452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.736 [2024-12-05 14:03:28.091474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.736 qpair failed and we were unable to recover it. 00:31:45.736 [2024-12-05 14:03:28.091574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.736 [2024-12-05 14:03:28.091596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.736 qpair failed and we were unable to recover it. 00:31:45.736 [2024-12-05 14:03:28.091749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.736 [2024-12-05 14:03:28.091771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.736 qpair failed and we were unable to recover it. 00:31:45.736 [2024-12-05 14:03:28.091918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.736 [2024-12-05 14:03:28.091941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.736 qpair failed and we were unable to recover it. 00:31:45.736 [2024-12-05 14:03:28.092098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.736 [2024-12-05 14:03:28.092120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.736 qpair failed and we were unable to recover it. 
00:31:45.736 [2024-12-05 14:03:28.092279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.736 [2024-12-05 14:03:28.092301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.736 qpair failed and we were unable to recover it. 00:31:45.736 [2024-12-05 14:03:28.092404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.736 [2024-12-05 14:03:28.092427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.736 qpair failed and we were unable to recover it. 00:31:45.736 [2024-12-05 14:03:28.092583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.736 [2024-12-05 14:03:28.092605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.736 qpair failed and we were unable to recover it. 00:31:45.736 [2024-12-05 14:03:28.092690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.736 [2024-12-05 14:03:28.092712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.736 qpair failed and we were unable to recover it. 00:31:45.736 [2024-12-05 14:03:28.092867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.736 [2024-12-05 14:03:28.092889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.736 qpair failed and we were unable to recover it. 
00:31:45.736 [2024-12-05 14:03:28.093035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.736 [2024-12-05 14:03:28.093057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.736 qpair failed and we were unable to recover it.
00:31:45.736 [2024-12-05 14:03:28.093147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.736 [2024-12-05 14:03:28.093169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.736 qpair failed and we were unable to recover it.
00:31:45.736 [2024-12-05 14:03:28.093256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.736 [2024-12-05 14:03:28.093279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.736 qpair failed and we were unable to recover it.
00:31:45.736 [2024-12-05 14:03:28.093430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.736 [2024-12-05 14:03:28.093453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.736 qpair failed and we were unable to recover it.
00:31:45.736 [2024-12-05 14:03:28.093625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.736 [2024-12-05 14:03:28.093647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.736 qpair failed and we were unable to recover it.
00:31:45.736 [2024-12-05 14:03:28.093827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.736 [2024-12-05 14:03:28.093849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.736 qpair failed and we were unable to recover it.
00:31:45.736 [2024-12-05 14:03:28.093945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.736 [2024-12-05 14:03:28.093966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.736 qpair failed and we were unable to recover it.
00:31:45.736 [2024-12-05 14:03:28.094072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.736 [2024-12-05 14:03:28.094094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.736 qpair failed and we were unable to recover it.
00:31:45.736 [2024-12-05 14:03:28.094272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.736 [2024-12-05 14:03:28.094294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.736 qpair failed and we were unable to recover it.
00:31:45.736 [2024-12-05 14:03:28.094450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.736 [2024-12-05 14:03:28.094473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.736 qpair failed and we were unable to recover it.
00:31:45.736 [2024-12-05 14:03:28.094631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.736 [2024-12-05 14:03:28.094653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.094826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.094849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.094944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.094966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.095135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.095156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.095330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.095353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.095443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.095465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.095634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.095657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.095821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.095843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.095950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.095972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.096081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.096103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.096182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.096204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.096364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.096412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.096576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.096598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.096700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.096726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.096875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.096897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.096999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.097021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.097176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.097199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.097290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.097314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.097396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.097419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.097522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.097544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.097706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.097728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.097874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.097895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.097979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.098001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.098172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.098194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.098284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.098305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.098398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.098420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.098578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.098600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.098762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.098784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.098882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.098904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.098998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.099020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.099116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.099139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.099248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.099270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.099420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.099444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.099476] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization...
00:31:45.737 [2024-12-05 14:03:28.099526] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:45.737 [2024-12-05 14:03:28.099614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.099638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.099729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.099750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.099976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.737 [2024-12-05 14:03:28.099997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.737 qpair failed and we were unable to recover it.
00:31:45.737 [2024-12-05 14:03:28.100095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.100116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.100267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.100290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.100452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.100475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.100637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.100664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.100758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.100781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.100885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.100907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.101007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.101029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.101147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.101169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.101320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.101342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.101458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.101482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.101702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.101725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.101837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.101860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.102100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.102125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.102300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.102324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.102536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.102560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.102737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.102761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.102844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.102867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.102961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.102984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.103090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.103111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.103262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.103286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.103505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.103529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.103699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.103722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.103959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.103982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.104083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.104106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.104198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.104219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.104404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.104426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.104540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.104562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.104662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.104685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.104846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.104868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.105035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.105057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.105144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.105169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.105355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.105386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.105491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.105513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.738 [2024-12-05 14:03:28.105604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.738 [2024-12-05 14:03:28.105625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.738 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.105796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.105819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.105992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.106016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.106142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.106164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.106388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.106410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.106519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.106539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.106644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.106666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.106753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.106774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.106922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.106944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.107109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.107131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.107281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.107303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.107425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.107452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.107541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.107563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.107674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.107696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.107792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.107814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.107916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.107938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.108090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.108113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.108207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.108229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.108381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.108405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.108480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.108502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.108665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.108687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.108779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.108802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.108972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.108994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.109107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.109129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.109210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.109232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.109479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.109502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.109612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.109635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.109798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.109820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.110011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.110034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.110129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.110152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.110251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.110275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.110373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.110395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.110487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.110509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.110604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.110626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.110720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.739 [2024-12-05 14:03:28.110743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.739 qpair failed and we were unable to recover it.
00:31:45.739 [2024-12-05 14:03:28.110914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.739 [2024-12-05 14:03:28.110936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.739 qpair failed and we were unable to recover it. 00:31:45.739 [2024-12-05 14:03:28.111046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.739 [2024-12-05 14:03:28.111068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.739 qpair failed and we were unable to recover it. 00:31:45.739 [2024-12-05 14:03:28.111153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.111175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.740 [2024-12-05 14:03:28.111282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.111307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.740 [2024-12-05 14:03:28.111417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.111453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 
00:31:45.740 [2024-12-05 14:03:28.111606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.111628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.740 [2024-12-05 14:03:28.111772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.111793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.740 [2024-12-05 14:03:28.111963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.111985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.740 [2024-12-05 14:03:28.112146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.112169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.740 [2024-12-05 14:03:28.112588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.112612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 
00:31:45.740 [2024-12-05 14:03:28.112778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.112801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.740 [2024-12-05 14:03:28.112958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.112980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.740 [2024-12-05 14:03:28.113131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.113154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.740 [2024-12-05 14:03:28.113249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.113271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.740 [2024-12-05 14:03:28.113491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.113514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 
00:31:45.740 [2024-12-05 14:03:28.113616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.113638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.740 [2024-12-05 14:03:28.113725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.113747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.740 [2024-12-05 14:03:28.113841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.113864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.740 [2024-12-05 14:03:28.114038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.114060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.740 [2024-12-05 14:03:28.114152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.114175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 
00:31:45.740 [2024-12-05 14:03:28.114278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.114301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.740 [2024-12-05 14:03:28.114454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.114478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.740 [2024-12-05 14:03:28.114721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.114744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.740 [2024-12-05 14:03:28.114908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.114930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.740 [2024-12-05 14:03:28.115024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.115047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 
00:31:45.740 [2024-12-05 14:03:28.115284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.115306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.740 [2024-12-05 14:03:28.115408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.115432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.740 [2024-12-05 14:03:28.115538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.115561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.740 [2024-12-05 14:03:28.115651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.115674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.740 [2024-12-05 14:03:28.115775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.115799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 
00:31:45.740 [2024-12-05 14:03:28.115951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.115975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.740 [2024-12-05 14:03:28.116169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.116191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.740 [2024-12-05 14:03:28.116410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.116433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.740 [2024-12-05 14:03:28.116530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.116553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.740 [2024-12-05 14:03:28.116639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.116661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 
00:31:45.740 [2024-12-05 14:03:28.116829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.116852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.740 [2024-12-05 14:03:28.117018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.117040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.740 [2024-12-05 14:03:28.117143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.117166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.740 [2024-12-05 14:03:28.117319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.117341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.740 [2024-12-05 14:03:28.117513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.117536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 
00:31:45.740 [2024-12-05 14:03:28.117690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.740 [2024-12-05 14:03:28.117731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.740 qpair failed and we were unable to recover it. 00:31:45.741 [2024-12-05 14:03:28.117836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.117869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 00:31:45.741 [2024-12-05 14:03:28.117995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.118027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 00:31:45.741 [2024-12-05 14:03:28.118133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.118164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 00:31:45.741 [2024-12-05 14:03:28.118415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.118487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 
00:31:45.741 [2024-12-05 14:03:28.118702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.118740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 00:31:45.741 [2024-12-05 14:03:28.119016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.119051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 00:31:45.741 [2024-12-05 14:03:28.119228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.119261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 00:31:45.741 [2024-12-05 14:03:28.119384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.119418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 00:31:45.741 [2024-12-05 14:03:28.119667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.119701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 
00:31:45.741 [2024-12-05 14:03:28.119818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.119851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 00:31:45.741 [2024-12-05 14:03:28.119958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.119991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 00:31:45.741 [2024-12-05 14:03:28.120199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.120231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 00:31:45.741 [2024-12-05 14:03:28.120439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.120465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 00:31:45.741 [2024-12-05 14:03:28.120642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.120664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 
00:31:45.741 [2024-12-05 14:03:28.120837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.120859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 00:31:45.741 [2024-12-05 14:03:28.121031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.121076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 00:31:45.741 [2024-12-05 14:03:28.121270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.121303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 00:31:45.741 [2024-12-05 14:03:28.121451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.121486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 00:31:45.741 [2024-12-05 14:03:28.121702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.121739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 
00:31:45.741 [2024-12-05 14:03:28.121930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.121962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 00:31:45.741 [2024-12-05 14:03:28.122080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.122118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 00:31:45.741 [2024-12-05 14:03:28.122241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.122272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 00:31:45.741 [2024-12-05 14:03:28.122397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.122430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 00:31:45.741 [2024-12-05 14:03:28.122616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.122649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 
00:31:45.741 [2024-12-05 14:03:28.122767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.122801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 00:31:45.741 [2024-12-05 14:03:28.122976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.123009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 00:31:45.741 [2024-12-05 14:03:28.123181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.123214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 00:31:45.741 [2024-12-05 14:03:28.123339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.123364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 00:31:45.741 [2024-12-05 14:03:28.123545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.123568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 
00:31:45.741 [2024-12-05 14:03:28.123653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.123676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 00:31:45.741 [2024-12-05 14:03:28.123898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.123921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 00:31:45.741 [2024-12-05 14:03:28.124080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.741 [2024-12-05 14:03:28.124103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.741 qpair failed and we were unable to recover it. 00:31:45.742 [2024-12-05 14:03:28.124263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.742 [2024-12-05 14:03:28.124285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.742 qpair failed and we were unable to recover it. 00:31:45.742 [2024-12-05 14:03:28.124505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.742 [2024-12-05 14:03:28.124528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.742 qpair failed and we were unable to recover it. 
00:31:45.742 [2024-12-05 14:03:28.124726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.742 [2024-12-05 14:03:28.124747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.742 qpair failed and we were unable to recover it. 00:31:45.742 [2024-12-05 14:03:28.124918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.742 [2024-12-05 14:03:28.124940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.742 qpair failed and we were unable to recover it. 00:31:45.742 [2024-12-05 14:03:28.125105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.742 [2024-12-05 14:03:28.125128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.742 qpair failed and we were unable to recover it. 00:31:45.742 [2024-12-05 14:03:28.125299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.742 [2024-12-05 14:03:28.125331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.742 qpair failed and we were unable to recover it. 00:31:45.742 [2024-12-05 14:03:28.125526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.742 [2024-12-05 14:03:28.125560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.742 qpair failed and we were unable to recover it. 
00:31:45.743 [2024-12-05 14:03:28.134466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.743 [2024-12-05 14:03:28.134543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:45.743 qpair failed and we were unable to recover it. 00:31:45.743 [2024-12-05 14:03:28.134782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.743 [2024-12-05 14:03:28.134820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.743 qpair failed and we were unable to recover it. 
00:31:45.744 [2024-12-05 14:03:28.144325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.744 [2024-12-05 14:03:28.144346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.744 qpair failed and we were unable to recover it. 00:31:45.744 [2024-12-05 14:03:28.144533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.744 [2024-12-05 14:03:28.144577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.744 qpair failed and we were unable to recover it. 00:31:45.744 [2024-12-05 14:03:28.144809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.744 [2024-12-05 14:03:28.144858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:45.744 qpair failed and we were unable to recover it. 00:31:45.744 [2024-12-05 14:03:28.144996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.744 [2024-12-05 14:03:28.145048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:45.744 qpair failed and we were unable to recover it. 00:31:45.744 [2024-12-05 14:03:28.145257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.145282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 
00:31:45.745 [2024-12-05 14:03:28.145394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.145417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.745 [2024-12-05 14:03:28.145571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.145594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.745 [2024-12-05 14:03:28.145687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.145708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.745 [2024-12-05 14:03:28.145928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.145950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.745 [2024-12-05 14:03:28.146100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.146123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 
00:31:45.745 [2024-12-05 14:03:28.146272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.146294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.745 [2024-12-05 14:03:28.146479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.146501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.745 [2024-12-05 14:03:28.146595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.146616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.745 [2024-12-05 14:03:28.146765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.146788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.745 [2024-12-05 14:03:28.147013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.147034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 
00:31:45.745 [2024-12-05 14:03:28.147193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.147215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.745 [2024-12-05 14:03:28.147374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.147405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.745 [2024-12-05 14:03:28.147521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.147543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.745 [2024-12-05 14:03:28.147716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.147738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.745 [2024-12-05 14:03:28.147830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.147851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 
00:31:45.745 [2024-12-05 14:03:28.147929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.147949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.745 [2024-12-05 14:03:28.148139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.148161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.745 [2024-12-05 14:03:28.148255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.148275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.745 [2024-12-05 14:03:28.148449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.148473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.745 [2024-12-05 14:03:28.148599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.148621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 
00:31:45.745 [2024-12-05 14:03:28.148770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.148792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.745 [2024-12-05 14:03:28.148964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.148985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.745 [2024-12-05 14:03:28.149088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.149112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.745 [2024-12-05 14:03:28.149195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.149215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.745 [2024-12-05 14:03:28.149434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.149456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 
00:31:45.745 [2024-12-05 14:03:28.149557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.149584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.745 [2024-12-05 14:03:28.149760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.149783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.745 [2024-12-05 14:03:28.149873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.149893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.745 [2024-12-05 14:03:28.150076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.150098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.745 [2024-12-05 14:03:28.150201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.150222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 
00:31:45.745 [2024-12-05 14:03:28.150318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.150343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.745 [2024-12-05 14:03:28.150445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.150469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.745 [2024-12-05 14:03:28.150696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.150718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.745 [2024-12-05 14:03:28.150816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.150838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.745 [2024-12-05 14:03:28.150945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.150968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 
00:31:45.745 [2024-12-05 14:03:28.151197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.151220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.745 [2024-12-05 14:03:28.151380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.151403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.745 [2024-12-05 14:03:28.151511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.745 [2024-12-05 14:03:28.151534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.745 qpair failed and we were unable to recover it. 00:31:45.746 [2024-12-05 14:03:28.151685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.746 [2024-12-05 14:03:28.151708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.746 qpair failed and we were unable to recover it. 00:31:45.746 [2024-12-05 14:03:28.151886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.746 [2024-12-05 14:03:28.151921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.746 qpair failed and we were unable to recover it. 
00:31:45.746 [2024-12-05 14:03:28.152101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.746 [2024-12-05 14:03:28.152134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.746 qpair failed and we were unable to recover it. 00:31:45.746 [2024-12-05 14:03:28.152390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.746 [2024-12-05 14:03:28.152424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.746 qpair failed and we were unable to recover it. 00:31:45.746 [2024-12-05 14:03:28.152565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.746 [2024-12-05 14:03:28.152590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.746 qpair failed and we were unable to recover it. 00:31:45.746 [2024-12-05 14:03:28.152778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.746 [2024-12-05 14:03:28.152801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.746 qpair failed and we were unable to recover it. 00:31:45.746 [2024-12-05 14:03:28.152917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.746 [2024-12-05 14:03:28.152940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.746 qpair failed and we were unable to recover it. 
00:31:45.746 [2024-12-05 14:03:28.153021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.746 [2024-12-05 14:03:28.153042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.746 qpair failed and we were unable to recover it. 00:31:45.746 [2024-12-05 14:03:28.153192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.746 [2024-12-05 14:03:28.153213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.746 qpair failed and we were unable to recover it. 00:31:45.746 [2024-12-05 14:03:28.153291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.746 [2024-12-05 14:03:28.153313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.746 qpair failed and we were unable to recover it. 00:31:45.746 [2024-12-05 14:03:28.153485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.746 [2024-12-05 14:03:28.153510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.746 qpair failed and we were unable to recover it. 00:31:45.746 [2024-12-05 14:03:28.153628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.746 [2024-12-05 14:03:28.153649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.746 qpair failed and we were unable to recover it. 
00:31:45.746 [2024-12-05 14:03:28.153912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.746 [2024-12-05 14:03:28.153933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.746 qpair failed and we were unable to recover it. 00:31:45.746 [2024-12-05 14:03:28.154093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.746 [2024-12-05 14:03:28.154115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.746 qpair failed and we were unable to recover it. 00:31:45.746 [2024-12-05 14:03:28.154335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.746 [2024-12-05 14:03:28.154356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.746 qpair failed and we were unable to recover it. 00:31:45.746 [2024-12-05 14:03:28.154478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.746 [2024-12-05 14:03:28.154499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.746 qpair failed and we were unable to recover it. 00:31:45.746 [2024-12-05 14:03:28.154657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.746 [2024-12-05 14:03:28.154678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.746 qpair failed and we were unable to recover it. 
00:31:45.746 [2024-12-05 14:03:28.154855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.746 [2024-12-05 14:03:28.154877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.746 qpair failed and we were unable to recover it. 00:31:45.746 [2024-12-05 14:03:28.154975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.746 [2024-12-05 14:03:28.154997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.746 qpair failed and we were unable to recover it. 00:31:45.746 [2024-12-05 14:03:28.155086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.746 [2024-12-05 14:03:28.155107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.746 qpair failed and we were unable to recover it. 00:31:45.746 [2024-12-05 14:03:28.155196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.746 [2024-12-05 14:03:28.155218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.746 qpair failed and we were unable to recover it. 00:31:45.746 [2024-12-05 14:03:28.155325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.746 [2024-12-05 14:03:28.155346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.746 qpair failed and we were unable to recover it. 
00:31:45.746 [2024-12-05 14:03:28.155445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.746 [2024-12-05 14:03:28.155467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.746 qpair failed and we were unable to recover it. 00:31:45.746 [2024-12-05 14:03:28.155685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.746 [2024-12-05 14:03:28.155708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.746 qpair failed and we were unable to recover it. 00:31:45.746 [2024-12-05 14:03:28.155801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.746 [2024-12-05 14:03:28.155824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.746 qpair failed and we were unable to recover it. 00:31:45.746 [2024-12-05 14:03:28.155937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.746 [2024-12-05 14:03:28.155959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.746 qpair failed and we were unable to recover it. 00:31:45.746 [2024-12-05 14:03:28.156050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.746 [2024-12-05 14:03:28.156072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.746 qpair failed and we were unable to recover it. 
00:31:45.746 [2024-12-05 14:03:28.156220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.746 [2024-12-05 14:03:28.156242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.746 qpair failed and we were unable to recover it. 00:31:45.746 [2024-12-05 14:03:28.156394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.746 [2024-12-05 14:03:28.156420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.746 qpair failed and we were unable to recover it. 00:31:45.746 [2024-12-05 14:03:28.156669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.746 [2024-12-05 14:03:28.156690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.746 qpair failed and we were unable to recover it. 00:31:45.746 [2024-12-05 14:03:28.156778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.156799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.747 [2024-12-05 14:03:28.156894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.156915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 
00:31:45.747 [2024-12-05 14:03:28.157026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.157048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.747 [2024-12-05 14:03:28.157256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.157278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.747 [2024-12-05 14:03:28.157449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.157471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.747 [2024-12-05 14:03:28.157701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.157723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.747 [2024-12-05 14:03:28.157915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.157937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 
00:31:45.747 [2024-12-05 14:03:28.158085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.158107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.747 [2024-12-05 14:03:28.158323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.158346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.747 [2024-12-05 14:03:28.158437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.158458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.747 [2024-12-05 14:03:28.158630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.158651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.747 [2024-12-05 14:03:28.158750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.158772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 
00:31:45.747 [2024-12-05 14:03:28.158874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.158897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.747 [2024-12-05 14:03:28.159012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.159034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.747 [2024-12-05 14:03:28.159193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.159217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.747 [2024-12-05 14:03:28.159461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.159483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.747 [2024-12-05 14:03:28.159582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.159604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 
00:31:45.747 [2024-12-05 14:03:28.159766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.159788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.747 [2024-12-05 14:03:28.159935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.159958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.747 [2024-12-05 14:03:28.160196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.160217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.747 [2024-12-05 14:03:28.160425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.160448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.747 [2024-12-05 14:03:28.160689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.160711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 
00:31:45.747 [2024-12-05 14:03:28.160863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.160885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.747 [2024-12-05 14:03:28.161053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.161077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.747 [2024-12-05 14:03:28.161186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.161209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.747 [2024-12-05 14:03:28.161385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.161413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.747 [2024-12-05 14:03:28.161507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.161530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 
00:31:45.747 [2024-12-05 14:03:28.161680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.161703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.747 [2024-12-05 14:03:28.161855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.161879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.747 [2024-12-05 14:03:28.162038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.162061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.747 [2024-12-05 14:03:28.162152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.162174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.747 [2024-12-05 14:03:28.162279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.162302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 
00:31:45.747 [2024-12-05 14:03:28.162469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.162491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.747 [2024-12-05 14:03:28.162657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.162679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.747 [2024-12-05 14:03:28.162834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.162856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.747 [2024-12-05 14:03:28.163074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.163096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.747 [2024-12-05 14:03:28.163190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.163211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 
00:31:45.747 [2024-12-05 14:03:28.163304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.163324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.747 [2024-12-05 14:03:28.163423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.747 [2024-12-05 14:03:28.163446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.747 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.163601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.163625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.163869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.163890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.163977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.163997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 
00:31:45.748 [2024-12-05 14:03:28.164091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.164114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.164271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.164293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.164446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.164469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.164563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.164586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.164739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.164761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 
00:31:45.748 [2024-12-05 14:03:28.164865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.164888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.164990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.165012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.165259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.165282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.165449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.165472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.165562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.165584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 
00:31:45.748 [2024-12-05 14:03:28.165684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.165705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.165942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.165964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.166249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.166271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.166412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.166434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.166528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.166550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 
00:31:45.748 [2024-12-05 14:03:28.166736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.166758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.166945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.166967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.167134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.167155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.167322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.167344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.167569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.167592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 
00:31:45.748 [2024-12-05 14:03:28.167710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.167733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.167960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.167981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.168129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.168151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.168261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.168283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.168403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.168430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 
00:31:45.748 [2024-12-05 14:03:28.168616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.168638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.168784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.168806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.168963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.168985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.169097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.169119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.169229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.169251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 
00:31:45.748 [2024-12-05 14:03:28.169351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.169380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.169552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.169574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.169656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.169675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.169776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.169797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 00:31:45.748 [2024-12-05 14:03:28.169915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.748 [2024-12-05 14:03:28.169936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.748 qpair failed and we were unable to recover it. 
00:31:45.748 [2024-12-05 14:03:28.170095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.749 [2024-12-05 14:03:28.170116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.749 qpair failed and we were unable to recover it. 00:31:45.749 [2024-12-05 14:03:28.170206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.749 [2024-12-05 14:03:28.170228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.749 qpair failed and we were unable to recover it. 00:31:45.749 [2024-12-05 14:03:28.170329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.749 [2024-12-05 14:03:28.170350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.749 qpair failed and we were unable to recover it. 00:31:45.749 [2024-12-05 14:03:28.170463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.749 [2024-12-05 14:03:28.170485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.749 qpair failed and we were unable to recover it. 00:31:45.749 [2024-12-05 14:03:28.170580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.749 [2024-12-05 14:03:28.170602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.749 qpair failed and we were unable to recover it. 
00:31:45.749 [2024-12-05 14:03:28.170700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.749 [2024-12-05 14:03:28.170722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.749 qpair failed and we were unable to recover it. 00:31:45.749 [2024-12-05 14:03:28.170825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.749 [2024-12-05 14:03:28.170847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.749 qpair failed and we were unable to recover it. 00:31:45.749 [2024-12-05 14:03:28.171010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.749 [2024-12-05 14:03:28.171033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.749 qpair failed and we were unable to recover it. 00:31:45.749 [2024-12-05 14:03:28.171192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.749 [2024-12-05 14:03:28.171215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.749 qpair failed and we were unable to recover it. 00:31:45.749 [2024-12-05 14:03:28.171296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.749 [2024-12-05 14:03:28.171317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.749 qpair failed and we were unable to recover it. 
00:31:45.749 [2024-12-05 14:03:28.171472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.749 [2024-12-05 14:03:28.171496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.749 qpair failed and we were unable to recover it. 00:31:45.749 [2024-12-05 14:03:28.171660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.749 [2024-12-05 14:03:28.171683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.749 qpair failed and we were unable to recover it. 00:31:45.749 [2024-12-05 14:03:28.171904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.749 [2024-12-05 14:03:28.171926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.749 qpair failed and we were unable to recover it. 00:31:45.749 [2024-12-05 14:03:28.172026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.749 [2024-12-05 14:03:28.172049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.749 qpair failed and we were unable to recover it. 00:31:45.749 [2024-12-05 14:03:28.172206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.749 [2024-12-05 14:03:28.172228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.749 qpair failed and we were unable to recover it. 
00:31:45.749 [2024-12-05 14:03:28.172400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.749 [2024-12-05 14:03:28.172424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.749 qpair failed and we were unable to recover it. 00:31:45.749 [2024-12-05 14:03:28.172581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.749 [2024-12-05 14:03:28.172603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.749 qpair failed and we were unable to recover it. 00:31:45.749 [2024-12-05 14:03:28.172691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.749 [2024-12-05 14:03:28.172712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.749 qpair failed and we were unable to recover it. 00:31:45.749 [2024-12-05 14:03:28.172808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.749 [2024-12-05 14:03:28.172831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.749 qpair failed and we were unable to recover it. 00:31:45.749 [2024-12-05 14:03:28.172932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.749 [2024-12-05 14:03:28.172954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.749 qpair failed and we were unable to recover it. 
00:31:45.749 [2024-12-05 14:03:28.173081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.749 [2024-12-05 14:03:28.173103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.749 qpair failed and we were unable to recover it. 00:31:45.749 [2024-12-05 14:03:28.173216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.749 [2024-12-05 14:03:28.173238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.749 qpair failed and we were unable to recover it. 00:31:45.749 [2024-12-05 14:03:28.173415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.749 [2024-12-05 14:03:28.173439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.749 qpair failed and we were unable to recover it. 00:31:45.749 [2024-12-05 14:03:28.173547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.749 [2024-12-05 14:03:28.173571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.749 qpair failed and we were unable to recover it. 00:31:45.749 [2024-12-05 14:03:28.173672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.749 [2024-12-05 14:03:28.173693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.749 qpair failed and we were unable to recover it. 
00:31:45.749 [2024-12-05 14:03:28.173791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.749 [2024-12-05 14:03:28.173814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.749 qpair failed and we were unable to recover it. 00:31:45.749 [2024-12-05 14:03:28.173913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.749 [2024-12-05 14:03:28.173935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.749 qpair failed and we were unable to recover it. 00:31:45.749 [2024-12-05 14:03:28.174047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.749 [2024-12-05 14:03:28.174069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.749 qpair failed and we were unable to recover it. 00:31:45.749 [2024-12-05 14:03:28.174240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.749 [2024-12-05 14:03:28.174262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.749 qpair failed and we were unable to recover it. 00:31:45.749 [2024-12-05 14:03:28.174440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.749 [2024-12-05 14:03:28.174462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.749 qpair failed and we were unable to recover it. 
00:31:45.749 [2024-12-05 14:03:28.174681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.749 [2024-12-05 14:03:28.174750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:45.749 qpair failed and we were unable to recover it.
00:31:45.749 [2024-12-05 14:03:28.175049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.749 [2024-12-05 14:03:28.175087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:45.749 qpair failed and we were unable to recover it.
00:31:45.749 [2024-12-05 14:03:28.175280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.749 [2024-12-05 14:03:28.175314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:45.749 qpair failed and we were unable to recover it.
00:31:45.749 [2024-12-05 14:03:28.175485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.749 [2024-12-05 14:03:28.175509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.749 qpair failed and we were unable to recover it.
00:31:45.749 [2024-12-05 14:03:28.175750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.749 [2024-12-05 14:03:28.175771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.749 qpair failed and we were unable to recover it.
00:31:45.750 [2024-12-05 14:03:28.180137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:31:45.752 [2024-12-05 14:03:28.192105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.752 [2024-12-05 14:03:28.192127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.752 qpair failed and we were unable to recover it. 00:31:45.752 [2024-12-05 14:03:28.192384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.752 [2024-12-05 14:03:28.192408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.752 qpair failed and we were unable to recover it. 00:31:45.752 [2024-12-05 14:03:28.192508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.752 [2024-12-05 14:03:28.192530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.752 qpair failed and we were unable to recover it. 00:31:45.752 [2024-12-05 14:03:28.192697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.752 [2024-12-05 14:03:28.192720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.752 qpair failed and we were unable to recover it. 00:31:45.752 [2024-12-05 14:03:28.192885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.752 [2024-12-05 14:03:28.192908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.752 qpair failed and we were unable to recover it. 
00:31:45.752 [2024-12-05 14:03:28.193077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.752 [2024-12-05 14:03:28.193101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.752 qpair failed and we were unable to recover it. 00:31:45.752 [2024-12-05 14:03:28.193259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.752 [2024-12-05 14:03:28.193281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.752 qpair failed and we were unable to recover it. 00:31:45.752 [2024-12-05 14:03:28.193443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.752 [2024-12-05 14:03:28.193466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.752 qpair failed and we were unable to recover it. 00:31:45.752 [2024-12-05 14:03:28.193572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.752 [2024-12-05 14:03:28.193594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.752 qpair failed and we were unable to recover it. 00:31:45.752 [2024-12-05 14:03:28.193740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.752 [2024-12-05 14:03:28.193762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 
00:31:45.753 [2024-12-05 14:03:28.194007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.194031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.194180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.194202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.194309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.194332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.194450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.194473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.194556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.194579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 
00:31:45.753 [2024-12-05 14:03:28.194678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.194701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.194812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.194834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.194930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.194956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.195112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.195145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.195302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.195323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 
00:31:45.753 [2024-12-05 14:03:28.195413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.195440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.195552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.195573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.195685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.195707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.195794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.195817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.195900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.195922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 
00:31:45.753 [2024-12-05 14:03:28.196019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.196041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.196190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.196213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.196376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.196398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.196487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.196509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.196657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.196679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 
00:31:45.753 [2024-12-05 14:03:28.196769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.196791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.196963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.196985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.197101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.197125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.197291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.197314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.197546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.197571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 
00:31:45.753 [2024-12-05 14:03:28.197753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.197779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.197943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.197966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.198118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.198141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.198235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.198258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.198420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.198445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 
00:31:45.753 [2024-12-05 14:03:28.198605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.198627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.198717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.198739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.198916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.198939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.199170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.199193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.199439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.199467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 
00:31:45.753 [2024-12-05 14:03:28.199635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.199657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.199776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.199798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.199925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.753 [2024-12-05 14:03:28.199948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.753 qpair failed and we were unable to recover it. 00:31:45.753 [2024-12-05 14:03:28.200189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.200211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 00:31:45.754 [2024-12-05 14:03:28.200317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.200340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 
00:31:45.754 [2024-12-05 14:03:28.200443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.200468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 00:31:45.754 [2024-12-05 14:03:28.200621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.200646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 00:31:45.754 [2024-12-05 14:03:28.200814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.200836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 00:31:45.754 [2024-12-05 14:03:28.200937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.200961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 00:31:45.754 [2024-12-05 14:03:28.201127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.201149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 
00:31:45.754 [2024-12-05 14:03:28.201249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.201272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 00:31:45.754 [2024-12-05 14:03:28.201443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.201468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 00:31:45.754 [2024-12-05 14:03:28.201569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.201591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 00:31:45.754 [2024-12-05 14:03:28.201699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.201722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 00:31:45.754 [2024-12-05 14:03:28.201826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.201848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 
00:31:45.754 [2024-12-05 14:03:28.201951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.201974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 00:31:45.754 [2024-12-05 14:03:28.202055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.202078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 00:31:45.754 [2024-12-05 14:03:28.202238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.202261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 00:31:45.754 [2024-12-05 14:03:28.202372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.202395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 00:31:45.754 [2024-12-05 14:03:28.202659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.202682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 
00:31:45.754 [2024-12-05 14:03:28.202778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.202800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 00:31:45.754 [2024-12-05 14:03:28.202953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.202975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 00:31:45.754 [2024-12-05 14:03:28.203130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.203152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 00:31:45.754 [2024-12-05 14:03:28.203400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.203426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 00:31:45.754 [2024-12-05 14:03:28.203587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.203611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 
00:31:45.754 [2024-12-05 14:03:28.203770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.203802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 00:31:45.754 [2024-12-05 14:03:28.203899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.203921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 00:31:45.754 [2024-12-05 14:03:28.204027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.204049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 00:31:45.754 [2024-12-05 14:03:28.204150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.204173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 00:31:45.754 [2024-12-05 14:03:28.204342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.204365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 
00:31:45.754 [2024-12-05 14:03:28.204536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.204560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 00:31:45.754 [2024-12-05 14:03:28.204659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.204682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 00:31:45.754 [2024-12-05 14:03:28.204910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.204933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 00:31:45.754 [2024-12-05 14:03:28.205034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.205057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 00:31:45.754 [2024-12-05 14:03:28.205208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.205231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 
00:31:45.754 [2024-12-05 14:03:28.205449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.205473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 00:31:45.754 [2024-12-05 14:03:28.205628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.205650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 00:31:45.754 [2024-12-05 14:03:28.205874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.205897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 00:31:45.754 [2024-12-05 14:03:28.206010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.206033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 00:31:45.754 [2024-12-05 14:03:28.206210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.206232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 
00:31:45.754 [2024-12-05 14:03:28.206481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.754 [2024-12-05 14:03:28.206510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.754 qpair failed and we were unable to recover it. 00:31:45.754 [2024-12-05 14:03:28.206755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.206777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 00:31:45.755 [2024-12-05 14:03:28.206902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.206924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 00:31:45.755 [2024-12-05 14:03:28.207022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.207044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 00:31:45.755 [2024-12-05 14:03:28.207287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.207309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 
00:31:45.755 [2024-12-05 14:03:28.207478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.207502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 00:31:45.755 [2024-12-05 14:03:28.207654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.207678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 00:31:45.755 [2024-12-05 14:03:28.207774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.207796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 00:31:45.755 [2024-12-05 14:03:28.207888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.207910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 00:31:45.755 [2024-12-05 14:03:28.208080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.208104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 
00:31:45.755 [2024-12-05 14:03:28.208398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.208422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 00:31:45.755 [2024-12-05 14:03:28.208593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.208616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 00:31:45.755 [2024-12-05 14:03:28.208785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.208809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 00:31:45.755 [2024-12-05 14:03:28.208987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.209009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 00:31:45.755 [2024-12-05 14:03:28.209167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.209189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 
00:31:45.755 [2024-12-05 14:03:28.209339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.209362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 00:31:45.755 [2024-12-05 14:03:28.209590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.209612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 00:31:45.755 [2024-12-05 14:03:28.209708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.209730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 00:31:45.755 [2024-12-05 14:03:28.209879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.209901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 00:31:45.755 [2024-12-05 14:03:28.210000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.210023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 
00:31:45.755 [2024-12-05 14:03:28.210128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.210150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 00:31:45.755 [2024-12-05 14:03:28.210318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.210342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 00:31:45.755 [2024-12-05 14:03:28.210506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.210529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 00:31:45.755 [2024-12-05 14:03:28.210620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.210642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 00:31:45.755 [2024-12-05 14:03:28.210810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.210833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 
00:31:45.755 [2024-12-05 14:03:28.211078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.211101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 00:31:45.755 [2024-12-05 14:03:28.211217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.211240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 00:31:45.755 [2024-12-05 14:03:28.211342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.211376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 00:31:45.755 [2024-12-05 14:03:28.211542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.211564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 00:31:45.755 [2024-12-05 14:03:28.211737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.211759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 
00:31:45.755 [2024-12-05 14:03:28.211851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.211875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 00:31:45.755 [2024-12-05 14:03:28.211971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.211995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 00:31:45.755 [2024-12-05 14:03:28.212076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.212097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 00:31:45.755 [2024-12-05 14:03:28.212316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.755 [2024-12-05 14:03:28.212338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.755 qpair failed and we were unable to recover it. 00:31:45.755 [2024-12-05 14:03:28.212438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.212461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 
00:31:45.756 [2024-12-05 14:03:28.212613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.212636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.756 [2024-12-05 14:03:28.212731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.212755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.756 [2024-12-05 14:03:28.212947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.212971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.756 [2024-12-05 14:03:28.213212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.213234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.756 [2024-12-05 14:03:28.213346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.213390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 
00:31:45.756 [2024-12-05 14:03:28.213655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.213679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.756 [2024-12-05 14:03:28.213860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.213907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.756 [2024-12-05 14:03:28.214038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.214072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.756 [2024-12-05 14:03:28.214253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.214294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.756 [2024-12-05 14:03:28.214499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.214524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 
00:31:45.756 [2024-12-05 14:03:28.214685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.214708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.756 [2024-12-05 14:03:28.214926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.214949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.756 [2024-12-05 14:03:28.215057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.215081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.756 [2024-12-05 14:03:28.215302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.215325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.756 [2024-12-05 14:03:28.215558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.215582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 
00:31:45.756 [2024-12-05 14:03:28.215804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.215827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.756 [2024-12-05 14:03:28.215918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.215941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.756 [2024-12-05 14:03:28.216107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.216130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.756 [2024-12-05 14:03:28.216211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.216235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.756 [2024-12-05 14:03:28.216406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.216430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 
00:31:45.756 [2024-12-05 14:03:28.216534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.216557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.756 [2024-12-05 14:03:28.216820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.216844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.756 [2024-12-05 14:03:28.217007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.217029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.756 [2024-12-05 14:03:28.217212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.217234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.756 [2024-12-05 14:03:28.217382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.217406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 
00:31:45.756 [2024-12-05 14:03:28.217509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.217531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.756 [2024-12-05 14:03:28.217702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.217728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.756 [2024-12-05 14:03:28.217883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.217909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.756 [2024-12-05 14:03:28.218009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.218034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.756 [2024-12-05 14:03:28.218196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.218223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 
00:31:45.756 [2024-12-05 14:03:28.218383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.218409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.756 [2024-12-05 14:03:28.218507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.218531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.756 [2024-12-05 14:03:28.218637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.218660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.756 [2024-12-05 14:03:28.218760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.218788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.756 [2024-12-05 14:03:28.219021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.219046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 
00:31:45.756 [2024-12-05 14:03:28.219147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.219171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.756 [2024-12-05 14:03:28.219325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.219351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.756 [2024-12-05 14:03:28.219535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.756 [2024-12-05 14:03:28.219561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.756 qpair failed and we were unable to recover it. 00:31:45.757 [2024-12-05 14:03:28.219846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.757 [2024-12-05 14:03:28.219872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.757 qpair failed and we were unable to recover it. 00:31:45.757 [2024-12-05 14:03:28.219979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.757 [2024-12-05 14:03:28.220013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.757 qpair failed and we were unable to recover it. 
00:31:45.757 [2024-12-05 14:03:28.220236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.757 [2024-12-05 14:03:28.220260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.757 qpair failed and we were unable to recover it.
00:31:45.757 [2024-12-05 14:03:28.220417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.757 [2024-12-05 14:03:28.220439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.757 qpair failed and we were unable to recover it.
00:31:45.757 [2024-12-05 14:03:28.220633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.757 [2024-12-05 14:03:28.220656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.757 qpair failed and we were unable to recover it.
00:31:45.757 [2024-12-05 14:03:28.220632] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:45.757 [2024-12-05 14:03:28.220663] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:45.757 [2024-12-05 14:03:28.220673] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:45.757 [2024-12-05 14:03:28.220680] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:45.757 [2024-12-05 14:03:28.220685] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:45.757 [2024-12-05 14:03:28.220745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.757 [2024-12-05 14:03:28.220767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.757 qpair failed and we were unable to recover it. 00:31:45.757 [2024-12-05 14:03:28.220930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.757 [2024-12-05 14:03:28.220951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.757 qpair failed and we were unable to recover it. 00:31:45.757 [2024-12-05 14:03:28.221045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.757 [2024-12-05 14:03:28.221067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.757 qpair failed and we were unable to recover it. 00:31:45.757 [2024-12-05 14:03:28.221228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.757 [2024-12-05 14:03:28.221249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.757 qpair failed and we were unable to recover it. 00:31:45.757 [2024-12-05 14:03:28.221345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.757 [2024-12-05 14:03:28.221373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.757 qpair failed and we were unable to recover it. 
00:31:45.757 [2024-12-05 14:03:28.221496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.757 [2024-12-05 14:03:28.221519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.757 qpair failed and we were unable to recover it. 00:31:45.757 [2024-12-05 14:03:28.221687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.757 [2024-12-05 14:03:28.221709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.757 qpair failed and we were unable to recover it. 00:31:45.757 [2024-12-05 14:03:28.221790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.757 [2024-12-05 14:03:28.221811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.757 qpair failed and we were unable to recover it. 00:31:45.757 [2024-12-05 14:03:28.221970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.757 [2024-12-05 14:03:28.221991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.757 qpair failed and we were unable to recover it. 00:31:45.757 [2024-12-05 14:03:28.222164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.757 [2024-12-05 14:03:28.222185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.757 qpair failed and we were unable to recover it. 
00:31:45.757 [2024-12-05 14:03:28.222304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:31:45.757 [2024-12-05 14:03:28.222404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:31:45.757 [2024-12-05 14:03:28.222512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:31:45.757 [2024-12-05 14:03:28.222513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:31:45.757 [2024-12-05 14:03:28.222353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.757 [2024-12-05 14:03:28.222416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.757 qpair failed and we were unable to recover it.
00:31:45.757 [2024-12-05 14:03:28.222541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.757 [2024-12-05 14:03:28.222565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.757 qpair failed and we were unable to recover it.
00:31:45.757 [2024-12-05 14:03:28.222668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.757 [2024-12-05 14:03:28.222691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.757 qpair failed and we were unable to recover it.
00:31:45.757 [2024-12-05 14:03:28.222775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.757 [2024-12-05 14:03:28.222797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.757 qpair failed and we were unable to recover it.
00:31:45.757 [2024-12-05 14:03:28.222911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.757 [2024-12-05 14:03:28.222936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.757 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (connect() failed, errno = 111; sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats 39 more times between 14:03:28.223090 and 14:03:28.229406 ...]
00:31:45.758 [2024-12-05 14:03:28.229575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.758 [2024-12-05 14:03:28.229599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.758 qpair failed and we were unable to recover it.
00:31:45.758 [2024-12-05 14:03:28.229804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.758 [2024-12-05 14:03:28.229854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:45.758 qpair failed and we were unable to recover it.
00:31:45.758 [2024-12-05 14:03:28.229973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.758 [2024-12-05 14:03:28.230007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:45.758 qpair failed and we were unable to recover it.
00:31:45.758 [2024-12-05 14:03:28.230246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.758 [2024-12-05 14:03:28.230285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:45.758 qpair failed and we were unable to recover it.
00:31:45.758 [2024-12-05 14:03:28.230410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.758 [2024-12-05 14:03:28.230437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.758 qpair failed and we were unable to recover it.
00:31:45.758 [2024-12-05 14:03:28.230679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:45.758 [2024-12-05 14:03:28.230702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:45.758 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (connect() failed, errno = 111; sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats 49 more times between 14:03:28.230873 and 14:03:28.239349 ...]
00:31:45.760 [2024-12-05 14:03:28.239465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.239519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 00:31:45.760 [2024-12-05 14:03:28.239788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.239864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 00:31:45.760 [2024-12-05 14:03:28.240115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.240168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 00:31:45.760 [2024-12-05 14:03:28.240291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.240316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 00:31:45.760 [2024-12-05 14:03:28.240432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.240465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 
00:31:45.760 [2024-12-05 14:03:28.240700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.240721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 00:31:45.760 [2024-12-05 14:03:28.240893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.240922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 00:31:45.760 [2024-12-05 14:03:28.241066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.241088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 00:31:45.760 [2024-12-05 14:03:28.241244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.241266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 00:31:45.760 [2024-12-05 14:03:28.241507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.241530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 
00:31:45.760 [2024-12-05 14:03:28.241626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.241647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 00:31:45.760 [2024-12-05 14:03:28.241759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.241781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 00:31:45.760 [2024-12-05 14:03:28.241975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.242008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 00:31:45.760 [2024-12-05 14:03:28.242165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.242188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 00:31:45.760 [2024-12-05 14:03:28.242381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.242405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 
00:31:45.760 [2024-12-05 14:03:28.242567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.242590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 00:31:45.760 [2024-12-05 14:03:28.242689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.242711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 00:31:45.760 [2024-12-05 14:03:28.242863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.242887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 00:31:45.760 [2024-12-05 14:03:28.243071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.243093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 00:31:45.760 [2024-12-05 14:03:28.243200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.243221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 
00:31:45.760 [2024-12-05 14:03:28.243404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.243427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 00:31:45.760 [2024-12-05 14:03:28.243588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.243610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 00:31:45.760 [2024-12-05 14:03:28.243696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.243718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 00:31:45.760 [2024-12-05 14:03:28.243866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.243889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 00:31:45.760 [2024-12-05 14:03:28.243988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.244010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 
00:31:45.760 [2024-12-05 14:03:28.244189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.244211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 00:31:45.760 [2024-12-05 14:03:28.244301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.244321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 00:31:45.760 [2024-12-05 14:03:28.244515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.244539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 00:31:45.760 [2024-12-05 14:03:28.244760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.244782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 00:31:45.760 [2024-12-05 14:03:28.245018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.245041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 
00:31:45.760 [2024-12-05 14:03:28.245150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.760 [2024-12-05 14:03:28.245172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.760 qpair failed and we were unable to recover it. 00:31:45.761 [2024-12-05 14:03:28.245426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.761 [2024-12-05 14:03:28.245451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.761 qpair failed and we were unable to recover it. 00:31:45.761 [2024-12-05 14:03:28.245617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.761 [2024-12-05 14:03:28.245641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.761 qpair failed and we were unable to recover it. 00:31:45.761 [2024-12-05 14:03:28.245905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.761 [2024-12-05 14:03:28.245927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.761 qpair failed and we were unable to recover it. 00:31:45.761 [2024-12-05 14:03:28.246101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.761 [2024-12-05 14:03:28.246124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.761 qpair failed and we were unable to recover it. 
00:31:45.761 [2024-12-05 14:03:28.246339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.761 [2024-12-05 14:03:28.246361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.761 qpair failed and we were unable to recover it. 00:31:45.761 [2024-12-05 14:03:28.246450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.761 [2024-12-05 14:03:28.246471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.761 qpair failed and we were unable to recover it. 00:31:45.761 [2024-12-05 14:03:28.246667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.761 [2024-12-05 14:03:28.246690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.761 qpair failed and we were unable to recover it. 00:31:45.761 [2024-12-05 14:03:28.246773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.761 [2024-12-05 14:03:28.246795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.761 qpair failed and we were unable to recover it. 00:31:45.761 [2024-12-05 14:03:28.246956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.761 [2024-12-05 14:03:28.246980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:45.761 qpair failed and we were unable to recover it. 
00:31:46.043 [2024-12-05 14:03:28.247092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.043 [2024-12-05 14:03:28.247115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.043 qpair failed and we were unable to recover it. 00:31:46.043 [2024-12-05 14:03:28.247398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.043 [2024-12-05 14:03:28.247438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:46.043 qpair failed and we were unable to recover it. 00:31:46.043 [2024-12-05 14:03:28.247637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.043 [2024-12-05 14:03:28.247669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:46.043 qpair failed and we were unable to recover it. 00:31:46.043 [2024-12-05 14:03:28.247847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.043 [2024-12-05 14:03:28.247880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:46.043 qpair failed and we were unable to recover it. 00:31:46.043 [2024-12-05 14:03:28.248107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.043 [2024-12-05 14:03:28.248131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.043 qpair failed and we were unable to recover it. 
00:31:46.043 [2024-12-05 14:03:28.248328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.043 [2024-12-05 14:03:28.248350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.043 qpair failed and we were unable to recover it. 00:31:46.043 [2024-12-05 14:03:28.248459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.043 [2024-12-05 14:03:28.248483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.043 qpair failed and we were unable to recover it. 00:31:46.043 [2024-12-05 14:03:28.248651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.043 [2024-12-05 14:03:28.248674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.043 qpair failed and we were unable to recover it. 00:31:46.043 [2024-12-05 14:03:28.248888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.043 [2024-12-05 14:03:28.248911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.043 qpair failed and we were unable to recover it. 00:31:46.043 [2024-12-05 14:03:28.249074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.043 [2024-12-05 14:03:28.249096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.043 qpair failed and we were unable to recover it. 
00:31:46.043 [2024-12-05 14:03:28.249197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.043 [2024-12-05 14:03:28.249218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.043 qpair failed and we were unable to recover it. 00:31:46.043 [2024-12-05 14:03:28.249384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.043 [2024-12-05 14:03:28.249407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.043 qpair failed and we were unable to recover it. 00:31:46.043 [2024-12-05 14:03:28.249566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.043 [2024-12-05 14:03:28.249588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.043 qpair failed and we were unable to recover it. 00:31:46.043 [2024-12-05 14:03:28.249687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.043 [2024-12-05 14:03:28.249710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.043 qpair failed and we were unable to recover it. 00:31:46.043 [2024-12-05 14:03:28.249894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.043 [2024-12-05 14:03:28.249916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.043 qpair failed and we were unable to recover it. 
00:31:46.043 [2024-12-05 14:03:28.250089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.043 [2024-12-05 14:03:28.250111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.043 qpair failed and we were unable to recover it. 00:31:46.043 [2024-12-05 14:03:28.250264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.043 [2024-12-05 14:03:28.250288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.043 qpair failed and we were unable to recover it. 00:31:46.043 [2024-12-05 14:03:28.250393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.043 [2024-12-05 14:03:28.250416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.043 qpair failed and we were unable to recover it. 00:31:46.043 [2024-12-05 14:03:28.250579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.043 [2024-12-05 14:03:28.250602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.043 qpair failed and we were unable to recover it. 00:31:46.043 [2024-12-05 14:03:28.250683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.043 [2024-12-05 14:03:28.250705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.043 qpair failed and we were unable to recover it. 
00:31:46.043 [2024-12-05 14:03:28.250862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.043 [2024-12-05 14:03:28.250886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.043 qpair failed and we were unable to recover it. 00:31:46.043 [2024-12-05 14:03:28.251038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.043 [2024-12-05 14:03:28.251061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.043 qpair failed and we were unable to recover it. 00:31:46.044 [2024-12-05 14:03:28.251168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.044 [2024-12-05 14:03:28.251189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.044 qpair failed and we were unable to recover it. 00:31:46.044 [2024-12-05 14:03:28.251352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.044 [2024-12-05 14:03:28.251379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.044 qpair failed and we were unable to recover it. 00:31:46.044 [2024-12-05 14:03:28.251551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.044 [2024-12-05 14:03:28.251573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.044 qpair failed and we were unable to recover it. 
00:31:46.044 [2024-12-05 14:03:28.251731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.044 [2024-12-05 14:03:28.251755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.044 qpair failed and we were unable to recover it. 00:31:46.044 [2024-12-05 14:03:28.251847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.044 [2024-12-05 14:03:28.251869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.044 qpair failed and we were unable to recover it. 00:31:46.044 [2024-12-05 14:03:28.252021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.044 [2024-12-05 14:03:28.252043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.044 qpair failed and we were unable to recover it. 00:31:46.044 [2024-12-05 14:03:28.252147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.044 [2024-12-05 14:03:28.252174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.044 qpair failed and we were unable to recover it. 00:31:46.044 [2024-12-05 14:03:28.252334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.044 [2024-12-05 14:03:28.252357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.044 qpair failed and we were unable to recover it. 
00:31:46.044 [2024-12-05 14:03:28.252466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.044 [2024-12-05 14:03:28.252488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.044 qpair failed and we were unable to recover it. 00:31:46.044 [2024-12-05 14:03:28.252585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.044 [2024-12-05 14:03:28.252607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.044 qpair failed and we were unable to recover it. 00:31:46.044 [2024-12-05 14:03:28.252760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.044 [2024-12-05 14:03:28.252781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.044 qpair failed and we were unable to recover it. 00:31:46.044 [2024-12-05 14:03:28.252859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.044 [2024-12-05 14:03:28.252881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.044 qpair failed and we were unable to recover it. 00:31:46.044 [2024-12-05 14:03:28.253076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.044 [2024-12-05 14:03:28.253102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.044 qpair failed and we were unable to recover it. 
00:31:46.044 [2024-12-05 14:03:28.253294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.044 [2024-12-05 14:03:28.253317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.044 qpair failed and we were unable to recover it. 00:31:46.044 [2024-12-05 14:03:28.253476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.044 [2024-12-05 14:03:28.253510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.044 qpair failed and we were unable to recover it. 00:31:46.044 [2024-12-05 14:03:28.253664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.044 [2024-12-05 14:03:28.253687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.044 qpair failed and we were unable to recover it. 00:31:46.044 [2024-12-05 14:03:28.253866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.044 [2024-12-05 14:03:28.253888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.044 qpair failed and we were unable to recover it. 00:31:46.044 [2024-12-05 14:03:28.254044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.044 [2024-12-05 14:03:28.254068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.044 qpair failed and we were unable to recover it. 
00:31:46.044 [2024-12-05 14:03:28.254154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.044 [2024-12-05 14:03:28.254175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.044 qpair failed and we were unable to recover it. 00:31:46.044 [2024-12-05 14:03:28.254282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.044 [2024-12-05 14:03:28.254305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.044 qpair failed and we were unable to recover it. 00:31:46.044 [2024-12-05 14:03:28.254473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.044 [2024-12-05 14:03:28.254498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.044 qpair failed and we were unable to recover it. 00:31:46.044 [2024-12-05 14:03:28.254593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.044 [2024-12-05 14:03:28.254616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.044 qpair failed and we were unable to recover it. 00:31:46.044 [2024-12-05 14:03:28.254770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.044 [2024-12-05 14:03:28.254796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.044 qpair failed and we were unable to recover it. 
00:31:46.044 [2024-12-05 14:03:28.254899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.044 [2024-12-05 14:03:28.254923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.044 qpair failed and we were unable to recover it.
[The connect()-failed / sock-connection-error / qpair-failed triplet above repeats continuously from 14:03:28.255172 through 14:03:28.273522 (over 100 further occurrences, log prefix 00:31:46.044-00:31:46.047): mostly for tqpair=0xbe5be0, once at 14:03:28.258759 for tqpair=0x7fdb5c000b90, and five times between 14:03:28.258999 and 14:03:28.259981 for tqpair=0x7fdb60000b90, all with addr=10.0.0.2, port=4420, errno = 111.]
00:31:46.047 [2024-12-05 14:03:28.273618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.047 [2024-12-05 14:03:28.273639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.047 qpair failed and we were unable to recover it. 00:31:46.047 [2024-12-05 14:03:28.273731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.047 [2024-12-05 14:03:28.273753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.047 qpair failed and we were unable to recover it. 00:31:46.047 [2024-12-05 14:03:28.273857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.047 [2024-12-05 14:03:28.273879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.047 qpair failed and we were unable to recover it. 00:31:46.047 [2024-12-05 14:03:28.274141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.047 [2024-12-05 14:03:28.274164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.047 qpair failed and we were unable to recover it. 00:31:46.047 [2024-12-05 14:03:28.274274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.047 [2024-12-05 14:03:28.274302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.047 qpair failed and we were unable to recover it. 
00:31:46.047 [2024-12-05 14:03:28.274505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.047 [2024-12-05 14:03:28.274529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.047 qpair failed and we were unable to recover it. 00:31:46.047 [2024-12-05 14:03:28.274726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.047 [2024-12-05 14:03:28.274749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.047 qpair failed and we were unable to recover it. 00:31:46.047 [2024-12-05 14:03:28.274946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.047 [2024-12-05 14:03:28.274969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.047 qpair failed and we were unable to recover it. 00:31:46.047 [2024-12-05 14:03:28.275055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.047 [2024-12-05 14:03:28.275078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.047 qpair failed and we were unable to recover it. 00:31:46.047 [2024-12-05 14:03:28.275230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.047 [2024-12-05 14:03:28.275252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.047 qpair failed and we were unable to recover it. 
00:31:46.047 [2024-12-05 14:03:28.275374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.047 [2024-12-05 14:03:28.275399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.047 qpair failed and we were unable to recover it. 00:31:46.047 [2024-12-05 14:03:28.275490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.047 [2024-12-05 14:03:28.275511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 00:31:46.048 [2024-12-05 14:03:28.275608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.275630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 00:31:46.048 [2024-12-05 14:03:28.275754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.275777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 00:31:46.048 [2024-12-05 14:03:28.275995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.276020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 
00:31:46.048 [2024-12-05 14:03:28.276181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.276205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 00:31:46.048 [2024-12-05 14:03:28.276313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.276338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 00:31:46.048 [2024-12-05 14:03:28.276544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.276569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 00:31:46.048 [2024-12-05 14:03:28.276727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.276750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 00:31:46.048 [2024-12-05 14:03:28.276904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.276927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 
00:31:46.048 [2024-12-05 14:03:28.277165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.277190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 00:31:46.048 [2024-12-05 14:03:28.277402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.277426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 00:31:46.048 [2024-12-05 14:03:28.277549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.277573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 00:31:46.048 [2024-12-05 14:03:28.277739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.277763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 00:31:46.048 [2024-12-05 14:03:28.278026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.278051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 
00:31:46.048 [2024-12-05 14:03:28.278299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.278324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 00:31:46.048 [2024-12-05 14:03:28.278423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.278445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 00:31:46.048 [2024-12-05 14:03:28.278541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.278565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 00:31:46.048 [2024-12-05 14:03:28.278732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.278756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 00:31:46.048 [2024-12-05 14:03:28.278912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.278935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 
00:31:46.048 [2024-12-05 14:03:28.279108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.279132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 00:31:46.048 [2024-12-05 14:03:28.279236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.279318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 00:31:46.048 [2024-12-05 14:03:28.279484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.279507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 00:31:46.048 [2024-12-05 14:03:28.279664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.279687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 00:31:46.048 [2024-12-05 14:03:28.279868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.279890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 
00:31:46.048 [2024-12-05 14:03:28.280158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.280181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 00:31:46.048 [2024-12-05 14:03:28.280349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.280376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 00:31:46.048 [2024-12-05 14:03:28.280564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.280586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 00:31:46.048 [2024-12-05 14:03:28.280747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.280770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 00:31:46.048 [2024-12-05 14:03:28.280896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.280918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 
00:31:46.048 [2024-12-05 14:03:28.281038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.281061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 00:31:46.048 [2024-12-05 14:03:28.281263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.281284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 00:31:46.048 [2024-12-05 14:03:28.281436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.281459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 00:31:46.048 [2024-12-05 14:03:28.281630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.281652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 00:31:46.048 [2024-12-05 14:03:28.281753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.281776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 
00:31:46.048 [2024-12-05 14:03:28.281968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.281990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 00:31:46.048 [2024-12-05 14:03:28.282083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.048 [2024-12-05 14:03:28.282105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.048 qpair failed and we were unable to recover it. 00:31:46.048 [2024-12-05 14:03:28.282321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.049 [2024-12-05 14:03:28.282343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.049 qpair failed and we were unable to recover it. 00:31:46.049 [2024-12-05 14:03:28.282449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.049 [2024-12-05 14:03:28.282473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.049 qpair failed and we were unable to recover it. 00:31:46.049 [2024-12-05 14:03:28.282585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.049 [2024-12-05 14:03:28.282607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.049 qpair failed and we were unable to recover it. 
00:31:46.049 [2024-12-05 14:03:28.282701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.049 [2024-12-05 14:03:28.282723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.049 qpair failed and we were unable to recover it. 00:31:46.049 [2024-12-05 14:03:28.282880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.049 [2024-12-05 14:03:28.282902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.049 qpair failed and we were unable to recover it. 00:31:46.049 [2024-12-05 14:03:28.283015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.049 [2024-12-05 14:03:28.283037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.049 qpair failed and we were unable to recover it. 00:31:46.049 [2024-12-05 14:03:28.283260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.049 [2024-12-05 14:03:28.283282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.049 qpair failed and we were unable to recover it. 00:31:46.049 [2024-12-05 14:03:28.283527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.049 [2024-12-05 14:03:28.283549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.049 qpair failed and we were unable to recover it. 
00:31:46.049 [2024-12-05 14:03:28.283715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.049 [2024-12-05 14:03:28.283736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.049 qpair failed and we were unable to recover it. 00:31:46.049 [2024-12-05 14:03:28.283844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.049 [2024-12-05 14:03:28.283865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.049 qpair failed and we were unable to recover it. 00:31:46.049 [2024-12-05 14:03:28.284056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.049 [2024-12-05 14:03:28.284078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.049 qpair failed and we were unable to recover it. 00:31:46.049 [2024-12-05 14:03:28.284307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.049 [2024-12-05 14:03:28.284329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.049 qpair failed and we were unable to recover it. 00:31:46.049 [2024-12-05 14:03:28.284560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.049 [2024-12-05 14:03:28.284584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.049 qpair failed and we were unable to recover it. 
00:31:46.049 [2024-12-05 14:03:28.284753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.049 [2024-12-05 14:03:28.284774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.049 qpair failed and we were unable to recover it. 00:31:46.049 [2024-12-05 14:03:28.284870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.049 [2024-12-05 14:03:28.284891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.049 qpair failed and we were unable to recover it. 00:31:46.049 [2024-12-05 14:03:28.284995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.049 [2024-12-05 14:03:28.285017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.049 qpair failed and we were unable to recover it. 00:31:46.049 [2024-12-05 14:03:28.285114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.049 [2024-12-05 14:03:28.285136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.049 qpair failed and we were unable to recover it. 00:31:46.049 [2024-12-05 14:03:28.285282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.049 [2024-12-05 14:03:28.285304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.049 qpair failed and we were unable to recover it. 
00:31:46.049 [2024-12-05 14:03:28.285401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.049 [2024-12-05 14:03:28.285427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.049 qpair failed and we were unable to recover it. 00:31:46.049 [2024-12-05 14:03:28.285657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.049 [2024-12-05 14:03:28.285679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.049 qpair failed and we were unable to recover it. 00:31:46.049 [2024-12-05 14:03:28.285862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.049 [2024-12-05 14:03:28.285885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.049 qpair failed and we were unable to recover it. 00:31:46.049 [2024-12-05 14:03:28.286000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.049 [2024-12-05 14:03:28.286021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.049 qpair failed and we were unable to recover it. 00:31:46.049 [2024-12-05 14:03:28.286235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.049 [2024-12-05 14:03:28.286256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.049 qpair failed and we were unable to recover it. 
00:31:46.049 [2024-12-05 14:03:28.286371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.049 [2024-12-05 14:03:28.286395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.049 qpair failed and we were unable to recover it. 00:31:46.049 [2024-12-05 14:03:28.286589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.049 [2024-12-05 14:03:28.286611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.049 qpair failed and we were unable to recover it. 00:31:46.049 [2024-12-05 14:03:28.286772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.049 [2024-12-05 14:03:28.286798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.049 qpair failed and we were unable to recover it. 00:31:46.049 [2024-12-05 14:03:28.286948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.049 [2024-12-05 14:03:28.286971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.049 qpair failed and we were unable to recover it. 00:31:46.049 [2024-12-05 14:03:28.287135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.049 [2024-12-05 14:03:28.287157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.049 qpair failed and we were unable to recover it. 
00:31:46.049 [2024-12-05 14:03:28.287311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.049 [2024-12-05 14:03:28.287333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.049 qpair failed and we were unable to recover it.
[the three entries above repeated 37 more times for tqpair=0xbe5be0, timestamps 14:03:28.287445 through 14:03:28.294573]
00:31:46.050 [2024-12-05 14:03:28.294721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.050 [2024-12-05 14:03:28.294775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:46.050 qpair failed and we were unable to recover it.
[the three entries above repeated 2 more times for tqpair=0x7fdb60000b90, timestamps 14:03:28.294992 through 14:03:28.295297]
00:31:46.050 [2024-12-05 14:03:28.295476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.050 [2024-12-05 14:03:28.295501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.050 qpair failed and we were unable to recover it.
[the three entries above repeated 73 more times for tqpair=0xbe5be0, timestamps 14:03:28.295677 through 14:03:28.310578]
00:31:46.052 [2024-12-05 14:03:28.310671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.052 [2024-12-05 14:03:28.310694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.052 qpair failed and we were unable to recover it. 00:31:46.052 [2024-12-05 14:03:28.310909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.052 [2024-12-05 14:03:28.310930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.052 qpair failed and we were unable to recover it. 00:31:46.052 [2024-12-05 14:03:28.311171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.052 [2024-12-05 14:03:28.311193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.052 qpair failed and we were unable to recover it. 00:31:46.052 [2024-12-05 14:03:28.311420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.052 [2024-12-05 14:03:28.311443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.052 qpair failed and we were unable to recover it. 00:31:46.052 [2024-12-05 14:03:28.311604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.052 [2024-12-05 14:03:28.311628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.052 qpair failed and we were unable to recover it. 
00:31:46.052 [2024-12-05 14:03:28.311783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.311806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.311921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.311943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.312136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.312158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.312278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.312300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.312516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.312538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 
00:31:46.053 [2024-12-05 14:03:28.312778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.312801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.313029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.313051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.313293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.313315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.313487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.313510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.313684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.313706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 
00:31:46.053 [2024-12-05 14:03:28.313886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.313908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.314151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.314173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.314363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.314392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.314551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.314578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.314697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.314718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 
00:31:46.053 [2024-12-05 14:03:28.314824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.314845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.315093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.315115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.315353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.315384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.315536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.315558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.315720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.315742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 
00:31:46.053 [2024-12-05 14:03:28.315957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.315980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.316246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.316269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.316433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.316456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.316614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.316636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.316808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.316830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 
00:31:46.053 [2024-12-05 14:03:28.316999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.317021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.317216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.317238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.317396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.317420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.317519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.317540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.317787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.317810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 
00:31:46.053 [2024-12-05 14:03:28.318044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.318065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.318179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.318202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.318377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.318400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.318568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.318590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.318828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.318850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 
00:31:46.053 [2024-12-05 14:03:28.318999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.319021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.319261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.319282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.319384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.319405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.053 [2024-12-05 14:03:28.319622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.053 [2024-12-05 14:03:28.319645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.053 qpair failed and we were unable to recover it. 00:31:46.054 [2024-12-05 14:03:28.319905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.054 [2024-12-05 14:03:28.319927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.054 qpair failed and we were unable to recover it. 
00:31:46.054 [2024-12-05 14:03:28.320163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.054 [2024-12-05 14:03:28.320184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.054 qpair failed and we were unable to recover it. 00:31:46.054 [2024-12-05 14:03:28.320405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.054 [2024-12-05 14:03:28.320430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.054 qpair failed and we were unable to recover it. 00:31:46.054 [2024-12-05 14:03:28.320620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.054 [2024-12-05 14:03:28.320642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.054 qpair failed and we were unable to recover it. 00:31:46.054 [2024-12-05 14:03:28.320890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.054 [2024-12-05 14:03:28.320913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.054 qpair failed and we were unable to recover it. 00:31:46.054 [2024-12-05 14:03:28.321075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.054 [2024-12-05 14:03:28.321099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.054 qpair failed and we were unable to recover it. 
[... connect()/qpair failure sequence continues to repeat (14:03:28.321350 through 14:03:28.322358), interleaved with: ...]
00:31:46.054 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:46.054 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:31:46.054 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:31:46.054 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
[... connect()/qpair failure sequence continues to repeat (14:03:28.322619 through 14:03:28.323318) ...]
00:31:46.054 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect()/qpair failure sequence continues to repeat (14:03:28.323487 through 14:03:28.324363) ...]
[... connect()/qpair failure sequence continues to repeat (14:03:28.324482 through 14:03:28.330967); every attempt against tqpair=0xbe5be0 (addr=10.0.0.2, port=4420) ends with "qpair failed and we were unable to recover it." ...]
00:31:46.055 [2024-12-05 14:03:28.331230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.055 [2024-12-05 14:03:28.331252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.055 qpair failed and we were unable to recover it. 00:31:46.055 [2024-12-05 14:03:28.331411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.055 [2024-12-05 14:03:28.331434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.055 qpair failed and we were unable to recover it. 00:31:46.055 [2024-12-05 14:03:28.331585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.055 [2024-12-05 14:03:28.331607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.055 qpair failed and we were unable to recover it. 00:31:46.055 [2024-12-05 14:03:28.331799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.055 [2024-12-05 14:03:28.331822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.055 qpair failed and we were unable to recover it. 00:31:46.055 [2024-12-05 14:03:28.332038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.055 [2024-12-05 14:03:28.332061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.055 qpair failed and we were unable to recover it. 
00:31:46.055 [2024-12-05 14:03:28.332249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.055 [2024-12-05 14:03:28.332272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.055 qpair failed and we were unable to recover it. 00:31:46.055 [2024-12-05 14:03:28.332449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.055 [2024-12-05 14:03:28.332472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.055 qpair failed and we were unable to recover it. 00:31:46.055 [2024-12-05 14:03:28.332624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.055 [2024-12-05 14:03:28.332647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.055 qpair failed and we were unable to recover it. 00:31:46.055 [2024-12-05 14:03:28.332741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.055 [2024-12-05 14:03:28.332761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.055 qpair failed and we were unable to recover it. 00:31:46.055 [2024-12-05 14:03:28.332857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.055 [2024-12-05 14:03:28.332880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.055 qpair failed and we were unable to recover it. 
00:31:46.055 [2024-12-05 14:03:28.332994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.055 [2024-12-05 14:03:28.333016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.055 qpair failed and we were unable to recover it. 00:31:46.055 [2024-12-05 14:03:28.333169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.055 [2024-12-05 14:03:28.333190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.055 qpair failed and we were unable to recover it. 00:31:46.055 [2024-12-05 14:03:28.333428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.055 [2024-12-05 14:03:28.333453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.055 qpair failed and we were unable to recover it. 00:31:46.055 [2024-12-05 14:03:28.333673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.055 [2024-12-05 14:03:28.333695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.055 qpair failed and we were unable to recover it. 00:31:46.055 [2024-12-05 14:03:28.333889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.055 [2024-12-05 14:03:28.333911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.055 qpair failed and we were unable to recover it. 
00:31:46.055 [2024-12-05 14:03:28.334159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.055 [2024-12-05 14:03:28.334182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.055 qpair failed and we were unable to recover it. 00:31:46.055 [2024-12-05 14:03:28.334348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.055 [2024-12-05 14:03:28.334380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.055 qpair failed and we were unable to recover it. 00:31:46.055 [2024-12-05 14:03:28.334508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.055 [2024-12-05 14:03:28.334537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.055 qpair failed and we were unable to recover it. 00:31:46.055 [2024-12-05 14:03:28.334641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.055 [2024-12-05 14:03:28.334664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.055 qpair failed and we were unable to recover it. 00:31:46.055 [2024-12-05 14:03:28.334827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.055 [2024-12-05 14:03:28.334851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.055 qpair failed and we were unable to recover it. 
00:31:46.055 [2024-12-05 14:03:28.335043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.055 [2024-12-05 14:03:28.335066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.055 qpair failed and we were unable to recover it. 00:31:46.055 [2024-12-05 14:03:28.335288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.055 [2024-12-05 14:03:28.335311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.055 qpair failed and we were unable to recover it. 00:31:46.055 [2024-12-05 14:03:28.335399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.055 [2024-12-05 14:03:28.335421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.055 qpair failed and we were unable to recover it. 00:31:46.055 [2024-12-05 14:03:28.335588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.055 [2024-12-05 14:03:28.335611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.055 qpair failed and we were unable to recover it. 00:31:46.055 [2024-12-05 14:03:28.335776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.055 [2024-12-05 14:03:28.335800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 
00:31:46.056 [2024-12-05 14:03:28.335895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.335918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.336070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.336091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.336338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.336361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.336516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.336538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.336652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.336673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 
00:31:46.056 [2024-12-05 14:03:28.336827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.336849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.337146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.337199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.337412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.337449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.337637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.337670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.337863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.337887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 
00:31:46.056 [2024-12-05 14:03:28.338075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.338097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.338313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.338335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.338541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.338567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.338757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.338779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.338936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.338958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 
00:31:46.056 [2024-12-05 14:03:28.339057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.339078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.339327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.339350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.339511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.339534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.339721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.339742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.339854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.339874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 
00:31:46.056 [2024-12-05 14:03:28.340050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.340072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.340240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.340262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.340425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.340447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.340603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.340625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.340835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.340857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 
00:31:46.056 [2024-12-05 14:03:28.341071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.341093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.341208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.341231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.341498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.341521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.341710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.341732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.341853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.341875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 
00:31:46.056 [2024-12-05 14:03:28.342048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.342071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.342165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.342185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.342282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.342308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.342543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.342571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.342685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.342708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 
00:31:46.056 [2024-12-05 14:03:28.342808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.342830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.343018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.343040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.343197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.056 [2024-12-05 14:03:28.343220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.056 qpair failed and we were unable to recover it. 00:31:46.056 [2024-12-05 14:03:28.343396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.057 [2024-12-05 14:03:28.343420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.057 qpair failed and we were unable to recover it. 00:31:46.057 [2024-12-05 14:03:28.343517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.057 [2024-12-05 14:03:28.343539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.057 qpair failed and we were unable to recover it. 
00:31:46.057 [2024-12-05 14:03:28.343704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.057 [2024-12-05 14:03:28.343727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.057 qpair failed and we were unable to recover it. 00:31:46.057 [2024-12-05 14:03:28.343821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.057 [2024-12-05 14:03:28.343844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.057 qpair failed and we were unable to recover it. 00:31:46.057 [2024-12-05 14:03:28.344036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.057 [2024-12-05 14:03:28.344061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.057 qpair failed and we were unable to recover it. 00:31:46.057 [2024-12-05 14:03:28.344241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.057 [2024-12-05 14:03:28.344263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.057 qpair failed and we were unable to recover it. 00:31:46.057 [2024-12-05 14:03:28.344441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.057 [2024-12-05 14:03:28.344465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.057 qpair failed and we were unable to recover it. 
00:31:46.057 [2024-12-05 14:03:28.344585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.057 [2024-12-05 14:03:28.344608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.057 qpair failed and we were unable to recover it. 00:31:46.057 [2024-12-05 14:03:28.344707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.057 [2024-12-05 14:03:28.344729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.057 qpair failed and we were unable to recover it. 00:31:46.057 [2024-12-05 14:03:28.344925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.057 [2024-12-05 14:03:28.344948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.057 qpair failed and we were unable to recover it. 00:31:46.057 [2024-12-05 14:03:28.345172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.057 [2024-12-05 14:03:28.345195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.057 qpair failed and we were unable to recover it. 00:31:46.057 [2024-12-05 14:03:28.345331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.057 [2024-12-05 14:03:28.345354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.057 qpair failed and we were unable to recover it. 
00:31:46.057 [2024-12-05 14:03:28.345553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.057 [2024-12-05 14:03:28.345576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.057 qpair failed and we were unable to recover it.
[repeats: the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence recurs continuously for tqpair=0xbe5be0 (briefly tqpair=0x7fdb60000b90 around 14:03:28.350445-350841) from 14:03:28.345746 through 14:03:28.358604]
00:31:46.059 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
[interleaved connect() failed (errno = 111) / qpair recovery failures for tqpair=0xbe5be0, 14:03:28.358256 through 14:03:28.358964, omitted]
00:31:46.059 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:31:46.059 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:46.059 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[interleaved connect() failed (errno = 111) / qpair recovery failures for tqpair=0xbe5be0, 14:03:28.359126 through 14:03:28.360695, omitted]
00:31:46.059 [2024-12-05 14:03:28.360860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.059 [2024-12-05 14:03:28.360883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.059 qpair failed and we were unable to recover it. 00:31:46.059 [2024-12-05 14:03:28.361147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.059 [2024-12-05 14:03:28.361169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.059 qpair failed and we were unable to recover it. 00:31:46.059 [2024-12-05 14:03:28.361328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.059 [2024-12-05 14:03:28.361350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.059 qpair failed and we were unable to recover it. 00:31:46.059 [2024-12-05 14:03:28.361491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.059 [2024-12-05 14:03:28.361513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.059 qpair failed and we were unable to recover it. 00:31:46.059 [2024-12-05 14:03:28.361697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.059 [2024-12-05 14:03:28.361719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.059 qpair failed and we were unable to recover it. 
00:31:46.059 [2024-12-05 14:03:28.361842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.059 [2024-12-05 14:03:28.361863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.059 qpair failed and we were unable to recover it. 00:31:46.059 [2024-12-05 14:03:28.362147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.059 [2024-12-05 14:03:28.362169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.059 qpair failed and we were unable to recover it. 00:31:46.059 [2024-12-05 14:03:28.362432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.059 [2024-12-05 14:03:28.362455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.059 qpair failed and we were unable to recover it. 00:31:46.059 [2024-12-05 14:03:28.362642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.059 [2024-12-05 14:03:28.362665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.059 qpair failed and we were unable to recover it. 00:31:46.059 [2024-12-05 14:03:28.362822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.059 [2024-12-05 14:03:28.362845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.059 qpair failed and we were unable to recover it. 
00:31:46.059 [2024-12-05 14:03:28.362959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.059 [2024-12-05 14:03:28.362981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.059 qpair failed and we were unable to recover it. 00:31:46.059 [2024-12-05 14:03:28.363213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.059 [2024-12-05 14:03:28.363235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.059 qpair failed and we were unable to recover it. 00:31:46.059 [2024-12-05 14:03:28.363503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.059 [2024-12-05 14:03:28.363526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.059 qpair failed and we were unable to recover it. 00:31:46.059 [2024-12-05 14:03:28.363636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.059 [2024-12-05 14:03:28.363658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.059 qpair failed and we were unable to recover it. 00:31:46.060 [2024-12-05 14:03:28.363849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.060 [2024-12-05 14:03:28.363870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.060 qpair failed and we were unable to recover it. 
00:31:46.060 [2024-12-05 14:03:28.364040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.060 [2024-12-05 14:03:28.364063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.060 qpair failed and we were unable to recover it. 00:31:46.060 [2024-12-05 14:03:28.364229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.060 [2024-12-05 14:03:28.364251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.060 qpair failed and we were unable to recover it. 00:31:46.060 [2024-12-05 14:03:28.364447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.060 [2024-12-05 14:03:28.364469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.060 qpair failed and we were unable to recover it. 00:31:46.060 [2024-12-05 14:03:28.364664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.060 [2024-12-05 14:03:28.364686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.060 qpair failed and we were unable to recover it. 00:31:46.060 [2024-12-05 14:03:28.364809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.060 [2024-12-05 14:03:28.364832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.060 qpair failed and we were unable to recover it. 
00:31:46.061 [2024-12-05 14:03:28.377047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.061 [2024-12-05 14:03:28.377102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb68000b90 with addr=10.0.0.2, port=4420
00:31:46.061 qpair failed and we were unable to recover it.
00:31:46.061 [2024-12-05 14:03:28.377328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.061 [2024-12-05 14:03:28.377363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420
00:31:46.061 qpair failed and we were unable to recover it.
00:31:46.063 [2024-12-05 14:03:28.387128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.063 [2024-12-05 14:03:28.387150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.063 qpair failed and we were unable to recover it. 00:31:46.063 [2024-12-05 14:03:28.387321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.063 [2024-12-05 14:03:28.387342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.063 qpair failed and we were unable to recover it. 00:31:46.063 [2024-12-05 14:03:28.387594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.063 [2024-12-05 14:03:28.387618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.063 qpair failed and we were unable to recover it. 00:31:46.063 [2024-12-05 14:03:28.387788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.063 [2024-12-05 14:03:28.387810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.063 qpair failed and we were unable to recover it. 00:31:46.063 [2024-12-05 14:03:28.387906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.063 [2024-12-05 14:03:28.387927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.063 qpair failed and we were unable to recover it. 
00:31:46.063 [2024-12-05 14:03:28.388012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.063 [2024-12-05 14:03:28.388033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.063 qpair failed and we were unable to recover it. 00:31:46.063 [2024-12-05 14:03:28.388191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.063 [2024-12-05 14:03:28.388213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.063 qpair failed and we were unable to recover it. 00:31:46.063 [2024-12-05 14:03:28.388383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.063 [2024-12-05 14:03:28.388406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.063 qpair failed and we were unable to recover it. 00:31:46.063 [2024-12-05 14:03:28.388556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.063 [2024-12-05 14:03:28.388582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.063 qpair failed and we were unable to recover it. 00:31:46.063 [2024-12-05 14:03:28.388743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.063 [2024-12-05 14:03:28.388765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.063 qpair failed and we were unable to recover it. 
00:31:46.063 [2024-12-05 14:03:28.389026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.063 [2024-12-05 14:03:28.389049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.063 qpair failed and we were unable to recover it. 00:31:46.063 [2024-12-05 14:03:28.389213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.063 [2024-12-05 14:03:28.389236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.063 qpair failed and we were unable to recover it. 00:31:46.063 [2024-12-05 14:03:28.389506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.063 [2024-12-05 14:03:28.389531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.063 qpair failed and we were unable to recover it. 00:31:46.063 [2024-12-05 14:03:28.389692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.063 [2024-12-05 14:03:28.389715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.063 qpair failed and we were unable to recover it. 00:31:46.063 [2024-12-05 14:03:28.389881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.063 [2024-12-05 14:03:28.389904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.063 qpair failed and we were unable to recover it. 
00:31:46.063 [2024-12-05 14:03:28.390013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.063 [2024-12-05 14:03:28.390035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.063 qpair failed and we were unable to recover it. 00:31:46.063 [2024-12-05 14:03:28.390195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.063 [2024-12-05 14:03:28.390217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.063 qpair failed and we were unable to recover it. 00:31:46.063 [2024-12-05 14:03:28.390380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.063 [2024-12-05 14:03:28.390403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.063 qpair failed and we were unable to recover it. 00:31:46.063 [2024-12-05 14:03:28.390572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.063 [2024-12-05 14:03:28.390594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.063 qpair failed and we were unable to recover it. 00:31:46.063 [2024-12-05 14:03:28.390678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.063 [2024-12-05 14:03:28.390699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.063 qpair failed and we were unable to recover it. 
00:31:46.063 [2024-12-05 14:03:28.390938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.063 [2024-12-05 14:03:28.390961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.063 qpair failed and we were unable to recover it. 00:31:46.063 [2024-12-05 14:03:28.391122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.063 [2024-12-05 14:03:28.391145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.063 qpair failed and we were unable to recover it. 00:31:46.063 [2024-12-05 14:03:28.391252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.063 [2024-12-05 14:03:28.391274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.063 qpair failed and we were unable to recover it. 00:31:46.063 [2024-12-05 14:03:28.391433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.063 [2024-12-05 14:03:28.391456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.063 qpair failed and we were unable to recover it. 00:31:46.063 [2024-12-05 14:03:28.391619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.063 [2024-12-05 14:03:28.391641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.063 qpair failed and we were unable to recover it. 
00:31:46.063 [2024-12-05 14:03:28.391806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.063 [2024-12-05 14:03:28.391827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.063 qpair failed and we were unable to recover it. 00:31:46.063 [2024-12-05 14:03:28.392002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.063 [2024-12-05 14:03:28.392024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.063 qpair failed and we were unable to recover it. 00:31:46.063 [2024-12-05 14:03:28.392242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.063 [2024-12-05 14:03:28.392265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.063 qpair failed and we were unable to recover it. 00:31:46.063 [2024-12-05 14:03:28.392505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.392528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 00:31:46.064 [2024-12-05 14:03:28.392639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.392661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 
00:31:46.064 [2024-12-05 14:03:28.392897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.392919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 00:31:46.064 [2024-12-05 14:03:28.393133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.393155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 00:31:46.064 [2024-12-05 14:03:28.393378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.393402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 00:31:46.064 [2024-12-05 14:03:28.393505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.393528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 00:31:46.064 [2024-12-05 14:03:28.393712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.393734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 
00:31:46.064 [2024-12-05 14:03:28.393975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.393998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 00:31:46.064 [2024-12-05 14:03:28.394244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.394267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 00:31:46.064 [2024-12-05 14:03:28.394438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.394461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 00:31:46.064 [2024-12-05 14:03:28.394722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.394744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 00:31:46.064 [2024-12-05 14:03:28.394849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.394873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 
00:31:46.064 [2024-12-05 14:03:28.395112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.395133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 00:31:46.064 [2024-12-05 14:03:28.395300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.395322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 00:31:46.064 [2024-12-05 14:03:28.395510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.395533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 00:31:46.064 [2024-12-05 14:03:28.395687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.395710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 00:31:46.064 [2024-12-05 14:03:28.395862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.395883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 
00:31:46.064 [2024-12-05 14:03:28.396061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.396082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 00:31:46.064 [2024-12-05 14:03:28.396249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.396271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 00:31:46.064 [2024-12-05 14:03:28.396466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.396489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 00:31:46.064 [2024-12-05 14:03:28.396704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.396725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 00:31:46.064 [2024-12-05 14:03:28.396917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.396958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 
00:31:46.064 [2024-12-05 14:03:28.397233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.397265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 00:31:46.064 [2024-12-05 14:03:28.397443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.397478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 00:31:46.064 [2024-12-05 14:03:28.397663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.397697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 00:31:46.064 [2024-12-05 14:03:28.397936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.397969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 00:31:46.064 [2024-12-05 14:03:28.398207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.398240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 
00:31:46.064 [2024-12-05 14:03:28.398518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.398547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 00:31:46.064 [2024-12-05 14:03:28.398717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.398740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 00:31:46.064 [2024-12-05 14:03:28.398858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.398882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 00:31:46.064 [2024-12-05 14:03:28.398998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.399021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 00:31:46.064 [2024-12-05 14:03:28.399180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.399202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 
00:31:46.064 [2024-12-05 14:03:28.399470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.399492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 00:31:46.064 [2024-12-05 14:03:28.399669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.399691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 00:31:46.064 [2024-12-05 14:03:28.399946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.399968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 00:31:46.064 [2024-12-05 14:03:28.400139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.400161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 00:31:46.064 [2024-12-05 14:03:28.400320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.400342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 
00:31:46.064 [2024-12-05 14:03:28.400621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.400644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.064 qpair failed and we were unable to recover it. 00:31:46.064 [2024-12-05 14:03:28.400757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.064 [2024-12-05 14:03:28.400779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.065 qpair failed and we were unable to recover it. 00:31:46.065 [2024-12-05 14:03:28.400875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.065 [2024-12-05 14:03:28.400896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.065 qpair failed and we were unable to recover it. 00:31:46.065 [2024-12-05 14:03:28.401011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.065 [2024-12-05 14:03:28.401033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.065 qpair failed and we were unable to recover it. 00:31:46.065 [2024-12-05 14:03:28.401194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.065 [2024-12-05 14:03:28.401216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.065 qpair failed and we were unable to recover it. 
00:31:46.065 [2024-12-05 14:03:28.401510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.065 [2024-12-05 14:03:28.401533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.065 qpair failed and we were unable to recover it. 00:31:46.065 [2024-12-05 14:03:28.401750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.065 [2024-12-05 14:03:28.401773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.065 qpair failed and we were unable to recover it. 00:31:46.065 [2024-12-05 14:03:28.401931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.065 [2024-12-05 14:03:28.401953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.065 qpair failed and we were unable to recover it. 00:31:46.065 [2024-12-05 14:03:28.402117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.065 [2024-12-05 14:03:28.402141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.065 qpair failed and we were unable to recover it. 00:31:46.065 [2024-12-05 14:03:28.402400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.065 [2024-12-05 14:03:28.402423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.065 qpair failed and we were unable to recover it. 
00:31:46.065 [2024-12-05 14:03:28.402532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.065 [2024-12-05 14:03:28.402553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.065 qpair failed and we were unable to recover it. 00:31:46.065 [2024-12-05 14:03:28.402736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.065 [2024-12-05 14:03:28.402762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.065 qpair failed and we were unable to recover it. 00:31:46.065 [2024-12-05 14:03:28.402933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.065 [2024-12-05 14:03:28.402955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.065 qpair failed and we were unable to recover it. 00:31:46.065 [2024-12-05 14:03:28.403139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.065 [2024-12-05 14:03:28.403160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.065 qpair failed and we were unable to recover it. 00:31:46.065 [2024-12-05 14:03:28.403381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.065 [2024-12-05 14:03:28.403404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.065 qpair failed and we were unable to recover it. 
00:31:46.065 [2024-12-05 14:03:28.403607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.065 [2024-12-05 14:03:28.403629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.065 qpair failed and we were unable to recover it. 00:31:46.065 [2024-12-05 14:03:28.403845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.065 [2024-12-05 14:03:28.403868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.065 qpair failed and we were unable to recover it. 00:31:46.065 [2024-12-05 14:03:28.404037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.065 [2024-12-05 14:03:28.404060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.065 qpair failed and we were unable to recover it. 00:31:46.065 [2024-12-05 14:03:28.404258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.065 [2024-12-05 14:03:28.404280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.065 qpair failed and we were unable to recover it. 00:31:46.065 [2024-12-05 14:03:28.404519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.065 [2024-12-05 14:03:28.404542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.065 qpair failed and we were unable to recover it. 
00:31:46.065 [2024-12-05 14:03:28.404708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.065 [2024-12-05 14:03:28.404730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.065 qpair failed and we were unable to recover it. 00:31:46.065 [2024-12-05 14:03:28.404915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.065 [2024-12-05 14:03:28.404938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.065 qpair failed and we were unable to recover it. 00:31:46.065 Malloc0 00:31:46.065 [2024-12-05 14:03:28.405121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.065 [2024-12-05 14:03:28.405143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.065 qpair failed and we were unable to recover it. 00:31:46.065 [2024-12-05 14:03:28.405408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.065 [2024-12-05 14:03:28.405431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.065 qpair failed and we were unable to recover it. 00:31:46.065 [2024-12-05 14:03:28.405544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.065 [2024-12-05 14:03:28.405568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.065 qpair failed and we were unable to recover it. 
00:31:46.065 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] [2024-12-05 14:03:28.405821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.065 [2024-12-05 14:03:28.405845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.065 qpair failed and we were unable to recover it.
00:31:46.065 [2024-12-05 14:03:28.406004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.065 [2024-12-05 14:03:28.406026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.065 qpair failed and we were unable to recover it.
00:31:46.065 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o [2024-12-05 14:03:28.406274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.065 [2024-12-05 14:03:28.406297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.065 qpair failed and we were unable to recover it.
00:31:46.065 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable [2024-12-05 14:03:28.406462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.065 [2024-12-05 14:03:28.406485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.065 qpair failed and we were unable to recover it.
00:31:46.065 [2024-12-05 14:03:28.406663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.065 [2024-12-05 14:03:28.406686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.065 qpair failed and we were unable to recover it.
00:31:46.065 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x [2024-12-05 14:03:28.406858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.065 [2024-12-05 14:03:28.406881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.065 qpair failed and we were unable to recover it.
00:31:46.065 [2024-12-05 14:03:28.407117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.065 [2024-12-05 14:03:28.407139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.065 qpair failed and we were unable to recover it.
00:31:46.065 [2024-12-05 14:03:28.407397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.065 [2024-12-05 14:03:28.407420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.065 qpair failed and we were unable to recover it.
00:31:46.065 [2024-12-05 14:03:28.407575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.065 [2024-12-05 14:03:28.407597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.065 qpair failed and we were unable to recover it.
00:31:46.065 [2024-12-05 14:03:28.407845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.065 [2024-12-05 14:03:28.407867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.065 qpair failed and we were unable to recover it.
00:31:46.065 [2024-12-05 14:03:28.408047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.066 [2024-12-05 14:03:28.408069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.066 qpair failed and we were unable to recover it.
00:31:46.066 [2024-12-05 14:03:28.408307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.066 [2024-12-05 14:03:28.408329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.066 qpair failed and we were unable to recover it.
00:31:46.066 [2024-12-05 14:03:28.408510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.066 [2024-12-05 14:03:28.408533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.066 qpair failed and we were unable to recover it.
00:31:46.066 [2024-12-05 14:03:28.408707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.066 [2024-12-05 14:03:28.408729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.066 qpair failed and we were unable to recover it.
00:31:46.066 [2024-12-05 14:03:28.408958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.066 [2024-12-05 14:03:28.408981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.066 qpair failed and we were unable to recover it.
00:31:46.066 [2024-12-05 14:03:28.409199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.066 [2024-12-05 14:03:28.409221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.066 qpair failed and we were unable to recover it.
00:31:46.066 [2024-12-05 14:03:28.409384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.066 [2024-12-05 14:03:28.409407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.066 qpair failed and we were unable to recover it.
00:31:46.066 [2024-12-05 14:03:28.409585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.066 [2024-12-05 14:03:28.409607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.066 qpair failed and we were unable to recover it.
00:31:46.066 [2024-12-05 14:03:28.409847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.066 [2024-12-05 14:03:28.409871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.066 qpair failed and we were unable to recover it.
00:31:46.066 [2024-12-05 14:03:28.410105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.066 [2024-12-05 14:03:28.410128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.066 qpair failed and we were unable to recover it.
00:31:46.066 [2024-12-05 14:03:28.410377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.066 [2024-12-05 14:03:28.410401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.066 qpair failed and we were unable to recover it.
00:31:46.066 [2024-12-05 14:03:28.410645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.066 [2024-12-05 14:03:28.410667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.066 qpair failed and we were unable to recover it.
00:31:46.066 [2024-12-05 14:03:28.410781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.066 [2024-12-05 14:03:28.410804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.066 qpair failed and we were unable to recover it.
00:31:46.066 [2024-12-05 14:03:28.411025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.066 [2024-12-05 14:03:28.411047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.066 qpair failed and we were unable to recover it.
00:31:46.066 [2024-12-05 14:03:28.411203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.066 [2024-12-05 14:03:28.411225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.066 qpair failed and we were unable to recover it.
00:31:46.066 [2024-12-05 14:03:28.411498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.066 [2024-12-05 14:03:28.411526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.066 qpair failed and we were unable to recover it.
00:31:46.066 [2024-12-05 14:03:28.411748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.066 [2024-12-05 14:03:28.411770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.066 qpair failed and we were unable to recover it.
00:31:46.066 [2024-12-05 14:03:28.411984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.066 [2024-12-05 14:03:28.412006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.066 qpair failed and we were unable to recover it.
00:31:46.066 [2024-12-05 14:03:28.412228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.066 [2024-12-05 14:03:28.412249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.066 qpair failed and we were unable to recover it.
00:31:46.066 [2024-12-05 14:03:28.412491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.066 [2024-12-05 14:03:28.412516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.066 qpair failed and we were unable to recover it.
00:31:46.066 [2024-12-05 14:03:28.412643] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** [2024-12-05 14:03:28.412705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.066 [2024-12-05 14:03:28.412726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.066 qpair failed and we were unable to recover it.
00:31:46.066 [2024-12-05 14:03:28.412949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.066 [2024-12-05 14:03:28.412971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.066 qpair failed and we were unable to recover it.
00:31:46.066 [2024-12-05 14:03:28.413215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.066 [2024-12-05 14:03:28.413237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.066 qpair failed and we were unable to recover it.
00:31:46.066 [2024-12-05 14:03:28.413410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.066 [2024-12-05 14:03:28.413432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.066 qpair failed and we were unable to recover it.
00:31:46.066 [2024-12-05 14:03:28.413665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.066 [2024-12-05 14:03:28.413688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.066 qpair failed and we were unable to recover it.
00:31:46.066 [2024-12-05 14:03:28.413853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.066 [2024-12-05 14:03:28.413876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.066 qpair failed and we were unable to recover it.
00:31:46.066 [2024-12-05 14:03:28.414032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.066 [2024-12-05 14:03:28.414054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.066 qpair failed and we were unable to recover it.
00:31:46.066 [2024-12-05 14:03:28.414273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.066 [2024-12-05 14:03:28.414296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.414449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.414472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.414722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.414744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.414963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.414984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.415152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.415174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.415392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.415415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.415585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.415608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.415717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.415740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.415952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.415974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.416132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.416154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.416445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.416468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.416629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.416652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.416748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.416769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.416930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.416953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.417050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.417071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.417278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.417315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.417510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.417544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.417737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.417769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.418021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.418054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.418244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.418270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.418431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.418454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.418563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.418586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.418846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.418868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.419061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.419082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.419326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.419349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.419555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.419578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.419797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.419820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.420070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.420091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.420252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.420273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.420470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.420494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.420719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.420741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.421057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.421079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] [2024-12-05 14:03:28.421321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.421345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.067 [2024-12-05 14:03:28.421514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.067 [2024-12-05 14:03:28.421538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.067 qpair failed and we were unable to recover it.
00:31:46.068 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 [2024-12-05 14:03:28.421697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.068 [2024-12-05 14:03:28.421720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.068 qpair failed and we were unable to recover it.
00:31:46.068 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable [2024-12-05 14:03:28.421936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.068 [2024-12-05 14:03:28.421960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.068 qpair failed and we were unable to recover it.
00:31:46.068 [2024-12-05 14:03:28.422060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.068 [2024-12-05 14:03:28.422081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.068 qpair failed and we were unable to recover it.
00:31:46.068 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x [2024-12-05 14:03:28.422317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.068 [2024-12-05 14:03:28.422341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.068 qpair failed and we were unable to recover it.
00:31:46.068 [2024-12-05 14:03:28.422538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.068 [2024-12-05 14:03:28.422561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.068 qpair failed and we were unable to recover it.
00:31:46.068 [2024-12-05 14:03:28.422783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.068 [2024-12-05 14:03:28.422807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.068 qpair failed and we were unable to recover it.
00:31:46.068 [2024-12-05 14:03:28.423047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.068 [2024-12-05 14:03:28.423074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.068 qpair failed and we were unable to recover it.
00:31:46.068 [2024-12-05 14:03:28.423269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.068 [2024-12-05 14:03:28.423291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.068 qpair failed and we were unable to recover it.
00:31:46.068 [2024-12-05 14:03:28.423531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.068 [2024-12-05 14:03:28.423554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.068 qpair failed and we were unable to recover it.
00:31:46.068 [2024-12-05 14:03:28.423784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.068 [2024-12-05 14:03:28.423805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.068 qpair failed and we were unable to recover it.
00:31:46.068 [2024-12-05 14:03:28.424046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.068 [2024-12-05 14:03:28.424070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.068 qpair failed and we were unable to recover it.
00:31:46.068 [2024-12-05 14:03:28.424255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.068 [2024-12-05 14:03:28.424278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.068 qpair failed and we were unable to recover it.
00:31:46.068 [2024-12-05 14:03:28.424394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.068 [2024-12-05 14:03:28.424418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.068 qpair failed and we were unable to recover it.
00:31:46.068 [2024-12-05 14:03:28.424646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.068 [2024-12-05 14:03:28.424668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.068 qpair failed and we were unable to recover it.
00:31:46.068 [2024-12-05 14:03:28.424916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.068 [2024-12-05 14:03:28.424937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.068 qpair failed and we were unable to recover it.
00:31:46.068 [2024-12-05 14:03:28.425175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.068 [2024-12-05 14:03:28.425198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.068 qpair failed and we were unable to recover it.
00:31:46.068 [2024-12-05 14:03:28.425376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.068 [2024-12-05 14:03:28.425399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.068 qpair failed and we were unable to recover it.
00:31:46.068 [2024-12-05 14:03:28.425562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.068 [2024-12-05 14:03:28.425585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.068 qpair failed and we were unable to recover it.
00:31:46.068 [2024-12-05 14:03:28.425807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.068 [2024-12-05 14:03:28.425829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.068 qpair failed and we were unable to recover it.
00:31:46.068 [2024-12-05 14:03:28.426013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.068 [2024-12-05 14:03:28.426034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.068 qpair failed and we were unable to recover it.
00:31:46.068 [2024-12-05 14:03:28.426150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.068 [2024-12-05 14:03:28.426173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.068 qpair failed and we were unable to recover it.
00:31:46.068 [2024-12-05 14:03:28.426418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.068 [2024-12-05 14:03:28.426441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420
00:31:46.068 qpair failed and we were unable to recover it.
00:31:46.068 [2024-12-05 14:03:28.426620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.068 [2024-12-05 14:03:28.426642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.068 qpair failed and we were unable to recover it. 00:31:46.068 [2024-12-05 14:03:28.426871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.068 [2024-12-05 14:03:28.426893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.068 qpair failed and we were unable to recover it. 00:31:46.068 [2024-12-05 14:03:28.427110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.068 [2024-12-05 14:03:28.427133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.068 qpair failed and we were unable to recover it. 00:31:46.068 [2024-12-05 14:03:28.427399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.068 [2024-12-05 14:03:28.427423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.068 qpair failed and we were unable to recover it. 00:31:46.068 [2024-12-05 14:03:28.427573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.068 [2024-12-05 14:03:28.427596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.068 qpair failed and we were unable to recover it. 
00:31:46.068 [2024-12-05 14:03:28.427839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.068 [2024-12-05 14:03:28.427862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.068 qpair failed and we were unable to recover it. 00:31:46.068 [2024-12-05 14:03:28.428029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.068 [2024-12-05 14:03:28.428052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.068 qpair failed and we were unable to recover it. 00:31:46.068 [2024-12-05 14:03:28.428279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.068 [2024-12-05 14:03:28.428302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.068 qpair failed and we were unable to recover it. 00:31:46.068 [2024-12-05 14:03:28.428516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.068 [2024-12-05 14:03:28.428540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.068 qpair failed and we were unable to recover it. 00:31:46.068 [2024-12-05 14:03:28.428754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.068 [2024-12-05 14:03:28.428777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.068 qpair failed and we were unable to recover it. 
00:31:46.068 [2024-12-05 14:03:28.428997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.068 [2024-12-05 14:03:28.429020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.068 qpair failed and we were unable to recover it. 00:31:46.068 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.068 [2024-12-05 14:03:28.429281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.068 [2024-12-05 14:03:28.429304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.068 qpair failed and we were unable to recover it. 00:31:46.068 [2024-12-05 14:03:28.429457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.068 [2024-12-05 14:03:28.429479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.068 qpair failed and we were unable to recover it. 00:31:46.069 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:46.069 [2024-12-05 14:03:28.429590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.429612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 
00:31:46.069 [2024-12-05 14:03:28.429782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.429805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 00:31:46.069 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.069 [2024-12-05 14:03:28.429918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.429942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 00:31:46.069 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:46.069 [2024-12-05 14:03:28.430178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.430202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 00:31:46.069 [2024-12-05 14:03:28.430433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.430456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 00:31:46.069 [2024-12-05 14:03:28.430694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.430716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 
00:31:46.069 [2024-12-05 14:03:28.430954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.430976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 00:31:46.069 [2024-12-05 14:03:28.431131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.431154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 00:31:46.069 [2024-12-05 14:03:28.431322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.431346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 00:31:46.069 [2024-12-05 14:03:28.431408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf3b20 (9): Bad file descriptor 00:31:46.069 [2024-12-05 14:03:28.431740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.431779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb5c000b90 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 00:31:46.069 [2024-12-05 14:03:28.432031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.432102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 
00:31:46.069 [2024-12-05 14:03:28.432389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.432428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdb60000b90 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 00:31:46.069 [2024-12-05 14:03:28.432539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.432564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 00:31:46.069 [2024-12-05 14:03:28.432677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.432699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 00:31:46.069 [2024-12-05 14:03:28.432935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.432957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 00:31:46.069 [2024-12-05 14:03:28.433182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.433205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 
00:31:46.069 [2024-12-05 14:03:28.433449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.433472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 00:31:46.069 [2024-12-05 14:03:28.433556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.433577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 00:31:46.069 [2024-12-05 14:03:28.433675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.433696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 00:31:46.069 [2024-12-05 14:03:28.433886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.433907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 00:31:46.069 [2024-12-05 14:03:28.434056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.434077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 
00:31:46.069 [2024-12-05 14:03:28.434245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.434268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 00:31:46.069 [2024-12-05 14:03:28.434492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.434516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 00:31:46.069 [2024-12-05 14:03:28.434728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.434756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 00:31:46.069 [2024-12-05 14:03:28.435017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.435039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 00:31:46.069 [2024-12-05 14:03:28.435205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.435227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 
00:31:46.069 [2024-12-05 14:03:28.435395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.435419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 00:31:46.069 [2024-12-05 14:03:28.435599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.435621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 00:31:46.069 [2024-12-05 14:03:28.435840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.435862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 00:31:46.069 [2024-12-05 14:03:28.436047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.436071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 00:31:46.069 [2024-12-05 14:03:28.436299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.436322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 
00:31:46.069 [2024-12-05 14:03:28.436488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.069 [2024-12-05 14:03:28.436511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.069 qpair failed and we were unable to recover it. 00:31:46.070 [2024-12-05 14:03:28.436691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.070 [2024-12-05 14:03:28.436715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.070 qpair failed and we were unable to recover it. 00:31:46.070 [2024-12-05 14:03:28.436930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.070 [2024-12-05 14:03:28.436952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.070 qpair failed and we were unable to recover it. 00:31:46.070 [2024-12-05 14:03:28.437166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.070 [2024-12-05 14:03:28.437190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.070 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.070 qpair failed and we were unable to recover it. 00:31:46.070 [2024-12-05 14:03:28.437348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.070 [2024-12-05 14:03:28.437377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.070 qpair failed and we were unable to recover it. 
00:31:46.070 [2024-12-05 14:03:28.437549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.070 [2024-12-05 14:03:28.437572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.070 qpair failed and we were unable to recover it. 00:31:46.070 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:46.070 [2024-12-05 14:03:28.437680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.070 [2024-12-05 14:03:28.437704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.070 qpair failed and we were unable to recover it. 00:31:46.070 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.070 [2024-12-05 14:03:28.437948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.070 [2024-12-05 14:03:28.437970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.070 qpair failed and we were unable to recover it. 00:31:46.070 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:46.070 [2024-12-05 14:03:28.438214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.070 [2024-12-05 14:03:28.438237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.070 qpair failed and we were unable to recover it. 
00:31:46.070 [2024-12-05 14:03:28.438483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.070 [2024-12-05 14:03:28.438506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.070 qpair failed and we were unable to recover it. 00:31:46.070 [2024-12-05 14:03:28.438745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.070 [2024-12-05 14:03:28.438768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.070 qpair failed and we were unable to recover it. 00:31:46.070 [2024-12-05 14:03:28.438938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.070 [2024-12-05 14:03:28.438961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.070 qpair failed and we were unable to recover it. 00:31:46.070 [2024-12-05 14:03:28.439134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.070 [2024-12-05 14:03:28.439156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.070 qpair failed and we were unable to recover it. 00:31:46.070 [2024-12-05 14:03:28.439334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.070 [2024-12-05 14:03:28.439356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.070 qpair failed and we were unable to recover it. 
00:31:46.070 [2024-12-05 14:03:28.439475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.070 [2024-12-05 14:03:28.439498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.070 qpair failed and we were unable to recover it. 00:31:46.070 [2024-12-05 14:03:28.439660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.070 [2024-12-05 14:03:28.439682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.070 qpair failed and we were unable to recover it. 00:31:46.070 [2024-12-05 14:03:28.439991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.070 [2024-12-05 14:03:28.440014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.070 qpair failed and we were unable to recover it. 00:31:46.070 [2024-12-05 14:03:28.440170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.070 [2024-12-05 14:03:28.440193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.070 qpair failed and we were unable to recover it. 00:31:46.070 [2024-12-05 14:03:28.440387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.070 [2024-12-05 14:03:28.440411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.070 qpair failed and we were unable to recover it. 
00:31:46.070 [2024-12-05 14:03:28.440633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.070 [2024-12-05 14:03:28.440657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5be0 with addr=10.0.0.2, port=4420 00:31:46.070 qpair failed and we were unable to recover it. 00:31:46.070 [2024-12-05 14:03:28.440828] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:46.070 [2024-12-05 14:03:28.443301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.070 [2024-12-05 14:03:28.443404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.070 [2024-12-05 14:03:28.443438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.070 [2024-12-05 14:03:28.443454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.070 [2024-12-05 14:03:28.443469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.070 [2024-12-05 14:03:28.443504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.070 qpair failed and we were unable to recover it. 
00:31:46.070 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.070 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:46.070 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.070 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:46.070 [2024-12-05 14:03:28.453224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.070 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.070 [2024-12-05 14:03:28.453306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.070 [2024-12-05 14:03:28.453332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.070 [2024-12-05 14:03:28.453344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.070 [2024-12-05 14:03:28.453356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.070 [2024-12-05 14:03:28.453393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.070 qpair failed and we were unable to recover it. 
00:31:46.070 14:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 827526 00:31:46.070 [2024-12-05 14:03:28.463243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.070 [2024-12-05 14:03:28.463318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.070 [2024-12-05 14:03:28.463335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.070 [2024-12-05 14:03:28.463343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.070 [2024-12-05 14:03:28.463354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.070 [2024-12-05 14:03:28.463378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.070 qpair failed and we were unable to recover it. 
00:31:46.070 [2024-12-05 14:03:28.473253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.070 [2024-12-05 14:03:28.473313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.070 [2024-12-05 14:03:28.473327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.070 [2024-12-05 14:03:28.473333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.070 [2024-12-05 14:03:28.473339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.070 [2024-12-05 14:03:28.473353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.070 qpair failed and we were unable to recover it. 
00:31:46.070 [2024-12-05 14:03:28.483250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.070 [2024-12-05 14:03:28.483312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.070 [2024-12-05 14:03:28.483326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.070 [2024-12-05 14:03:28.483332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.070 [2024-12-05 14:03:28.483338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.071 [2024-12-05 14:03:28.483352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.071 qpair failed and we were unable to recover it. 
00:31:46.071 [2024-12-05 14:03:28.493235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.071 [2024-12-05 14:03:28.493291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.071 [2024-12-05 14:03:28.493304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.071 [2024-12-05 14:03:28.493310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.071 [2024-12-05 14:03:28.493316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.071 [2024-12-05 14:03:28.493330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.071 qpair failed and we were unable to recover it. 
00:31:46.071 [2024-12-05 14:03:28.503253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.071 [2024-12-05 14:03:28.503310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.071 [2024-12-05 14:03:28.503325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.071 [2024-12-05 14:03:28.503331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.071 [2024-12-05 14:03:28.503337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.071 [2024-12-05 14:03:28.503350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.071 qpair failed and we were unable to recover it. 
00:31:46.071 [2024-12-05 14:03:28.513377] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.071 [2024-12-05 14:03:28.513432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.071 [2024-12-05 14:03:28.513446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.071 [2024-12-05 14:03:28.513453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.071 [2024-12-05 14:03:28.513459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.071 [2024-12-05 14:03:28.513473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.071 qpair failed and we were unable to recover it. 
00:31:46.071 [2024-12-05 14:03:28.523331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.071 [2024-12-05 14:03:28.523389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.071 [2024-12-05 14:03:28.523404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.071 [2024-12-05 14:03:28.523410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.071 [2024-12-05 14:03:28.523416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.071 [2024-12-05 14:03:28.523431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.071 qpair failed and we were unable to recover it. 
00:31:46.071 [2024-12-05 14:03:28.533383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.071 [2024-12-05 14:03:28.533461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.071 [2024-12-05 14:03:28.533475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.071 [2024-12-05 14:03:28.533481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.071 [2024-12-05 14:03:28.533487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.071 [2024-12-05 14:03:28.533502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.071 qpair failed and we were unable to recover it. 
00:31:46.071 [2024-12-05 14:03:28.543380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.071 [2024-12-05 14:03:28.543433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.071 [2024-12-05 14:03:28.543446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.071 [2024-12-05 14:03:28.543453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.071 [2024-12-05 14:03:28.543459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.071 [2024-12-05 14:03:28.543473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.071 qpair failed and we were unable to recover it. 
00:31:46.071 [2024-12-05 14:03:28.553403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.071 [2024-12-05 14:03:28.553462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.071 [2024-12-05 14:03:28.553480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.071 [2024-12-05 14:03:28.553488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.071 [2024-12-05 14:03:28.553493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.071 [2024-12-05 14:03:28.553508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.071 qpair failed and we were unable to recover it. 
00:31:46.071 [2024-12-05 14:03:28.563428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.071 [2024-12-05 14:03:28.563484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.071 [2024-12-05 14:03:28.563499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.071 [2024-12-05 14:03:28.563505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.071 [2024-12-05 14:03:28.563511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.071 [2024-12-05 14:03:28.563525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.071 qpair failed and we were unable to recover it. 
00:31:46.071 [2024-12-05 14:03:28.573439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.071 [2024-12-05 14:03:28.573498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.071 [2024-12-05 14:03:28.573512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.071 [2024-12-05 14:03:28.573518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.071 [2024-12-05 14:03:28.573525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.071 [2024-12-05 14:03:28.573539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.071 qpair failed and we were unable to recover it. 
00:31:46.071 [2024-12-05 14:03:28.583469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.071 [2024-12-05 14:03:28.583521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.071 [2024-12-05 14:03:28.583535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.071 [2024-12-05 14:03:28.583542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.071 [2024-12-05 14:03:28.583547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.071 [2024-12-05 14:03:28.583562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.071 qpair failed and we were unable to recover it. 
00:31:46.071 [2024-12-05 14:03:28.593527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.071 [2024-12-05 14:03:28.593580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.071 [2024-12-05 14:03:28.593593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.071 [2024-12-05 14:03:28.593599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.072 [2024-12-05 14:03:28.593608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.072 [2024-12-05 14:03:28.593622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.072 qpair failed and we were unable to recover it. 
00:31:46.072 [2024-12-05 14:03:28.603542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.072 [2024-12-05 14:03:28.603623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.072 [2024-12-05 14:03:28.603638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.072 [2024-12-05 14:03:28.603644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.072 [2024-12-05 14:03:28.603650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.072 [2024-12-05 14:03:28.603665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.072 qpair failed and we were unable to recover it. 
00:31:46.333 [2024-12-05 14:03:28.613607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.333 [2024-12-05 14:03:28.613677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.333 [2024-12-05 14:03:28.613691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.333 [2024-12-05 14:03:28.613698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.333 [2024-12-05 14:03:28.613704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.333 [2024-12-05 14:03:28.613718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.333 qpair failed and we were unable to recover it. 
00:31:46.333 [2024-12-05 14:03:28.623589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.333 [2024-12-05 14:03:28.623638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.333 [2024-12-05 14:03:28.623653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.333 [2024-12-05 14:03:28.623660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.333 [2024-12-05 14:03:28.623667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.333 [2024-12-05 14:03:28.623681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.333 qpair failed and we were unable to recover it. 
00:31:46.333 [2024-12-05 14:03:28.633687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.333 [2024-12-05 14:03:28.633765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.333 [2024-12-05 14:03:28.633779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.333 [2024-12-05 14:03:28.633786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.333 [2024-12-05 14:03:28.633792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.333 [2024-12-05 14:03:28.633806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.333 qpair failed and we were unable to recover it. 
00:31:46.333 [2024-12-05 14:03:28.643675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.333 [2024-12-05 14:03:28.643728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.333 [2024-12-05 14:03:28.643741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.333 [2024-12-05 14:03:28.643748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.333 [2024-12-05 14:03:28.643754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.333 [2024-12-05 14:03:28.643768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.333 qpair failed and we were unable to recover it. 
00:31:46.333 [2024-12-05 14:03:28.653684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.333 [2024-12-05 14:03:28.653737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.333 [2024-12-05 14:03:28.653750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.333 [2024-12-05 14:03:28.653757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.333 [2024-12-05 14:03:28.653763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.333 [2024-12-05 14:03:28.653777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.333 qpair failed and we were unable to recover it. 
00:31:46.333 [2024-12-05 14:03:28.663698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.333 [2024-12-05 14:03:28.663750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.333 [2024-12-05 14:03:28.663763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.333 [2024-12-05 14:03:28.663769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.333 [2024-12-05 14:03:28.663775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.333 [2024-12-05 14:03:28.663789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.333 qpair failed and we were unable to recover it. 
00:31:46.333 [2024-12-05 14:03:28.673738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.333 [2024-12-05 14:03:28.673794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.333 [2024-12-05 14:03:28.673808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.333 [2024-12-05 14:03:28.673814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.333 [2024-12-05 14:03:28.673820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.333 [2024-12-05 14:03:28.673835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.333 qpair failed and we were unable to recover it. 
00:31:46.333 [2024-12-05 14:03:28.683763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.333 [2024-12-05 14:03:28.683821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.333 [2024-12-05 14:03:28.683838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.333 [2024-12-05 14:03:28.683845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.333 [2024-12-05 14:03:28.683851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.333 [2024-12-05 14:03:28.683865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.333 qpair failed and we were unable to recover it. 
00:31:46.333 [2024-12-05 14:03:28.693788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.334 [2024-12-05 14:03:28.693863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.334 [2024-12-05 14:03:28.693877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.334 [2024-12-05 14:03:28.693883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.334 [2024-12-05 14:03:28.693889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.334 [2024-12-05 14:03:28.693903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.334 qpair failed and we were unable to recover it. 
00:31:46.334 [2024-12-05 14:03:28.703808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.334 [2024-12-05 14:03:28.703863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.334 [2024-12-05 14:03:28.703876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.334 [2024-12-05 14:03:28.703883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.334 [2024-12-05 14:03:28.703889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.334 [2024-12-05 14:03:28.703903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.334 qpair failed and we were unable to recover it. 
00:31:46.334 [2024-12-05 14:03:28.713856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.334 [2024-12-05 14:03:28.713915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.334 [2024-12-05 14:03:28.713928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.334 [2024-12-05 14:03:28.713935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.334 [2024-12-05 14:03:28.713941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.334 [2024-12-05 14:03:28.713956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.334 qpair failed and we were unable to recover it. 
00:31:46.334 [2024-12-05 14:03:28.723876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.334 [2024-12-05 14:03:28.723927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.334 [2024-12-05 14:03:28.723940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.334 [2024-12-05 14:03:28.723947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.334 [2024-12-05 14:03:28.723956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.334 [2024-12-05 14:03:28.723970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.334 qpair failed and we were unable to recover it. 
00:31:46.334 [2024-12-05 14:03:28.733896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.334 [2024-12-05 14:03:28.733948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.334 [2024-12-05 14:03:28.733961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.334 [2024-12-05 14:03:28.733968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.334 [2024-12-05 14:03:28.733974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.334 [2024-12-05 14:03:28.733988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.334 qpair failed and we were unable to recover it. 
00:31:46.334 [2024-12-05 14:03:28.743920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.334 [2024-12-05 14:03:28.743979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.334 [2024-12-05 14:03:28.743992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.334 [2024-12-05 14:03:28.743998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.334 [2024-12-05 14:03:28.744004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.334 [2024-12-05 14:03:28.744018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.334 qpair failed and we were unable to recover it. 
00:31:46.334 [2024-12-05 14:03:28.753951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.334 [2024-12-05 14:03:28.754005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.334 [2024-12-05 14:03:28.754018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.334 [2024-12-05 14:03:28.754024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.334 [2024-12-05 14:03:28.754030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.334 [2024-12-05 14:03:28.754044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.334 qpair failed and we were unable to recover it. 
00:31:46.334 [2024-12-05 14:03:28.763959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.334 [2024-12-05 14:03:28.764015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.334 [2024-12-05 14:03:28.764029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.334 [2024-12-05 14:03:28.764036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.334 [2024-12-05 14:03:28.764042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.334 [2024-12-05 14:03:28.764057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.334 qpair failed and we were unable to recover it. 
00:31:46.334 [2024-12-05 14:03:28.774002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.334 [2024-12-05 14:03:28.774050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.334 [2024-12-05 14:03:28.774063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.334 [2024-12-05 14:03:28.774070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.334 [2024-12-05 14:03:28.774076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.334 [2024-12-05 14:03:28.774090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.334 qpair failed and we were unable to recover it. 
00:31:46.334 [2024-12-05 14:03:28.784030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.334 [2024-12-05 14:03:28.784082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.334 [2024-12-05 14:03:28.784096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.334 [2024-12-05 14:03:28.784102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.334 [2024-12-05 14:03:28.784108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.334 [2024-12-05 14:03:28.784122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.334 qpair failed and we were unable to recover it. 
00:31:46.334 [2024-12-05 14:03:28.794053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.334 [2024-12-05 14:03:28.794109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.334 [2024-12-05 14:03:28.794122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.334 [2024-12-05 14:03:28.794129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.334 [2024-12-05 14:03:28.794135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.334 [2024-12-05 14:03:28.794148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.334 qpair failed and we were unable to recover it. 
00:31:46.334 [2024-12-05 14:03:28.804117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.334 [2024-12-05 14:03:28.804173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.334 [2024-12-05 14:03:28.804187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.334 [2024-12-05 14:03:28.804194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.334 [2024-12-05 14:03:28.804200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.334 [2024-12-05 14:03:28.804214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.334 qpair failed and we were unable to recover it. 
00:31:46.334 [2024-12-05 14:03:28.814128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.334 [2024-12-05 14:03:28.814181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.334 [2024-12-05 14:03:28.814200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.334 [2024-12-05 14:03:28.814207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.334 [2024-12-05 14:03:28.814212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.334 [2024-12-05 14:03:28.814227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.334 qpair failed and we were unable to recover it.
00:31:46.334 [2024-12-05 14:03:28.824140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.335 [2024-12-05 14:03:28.824193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.335 [2024-12-05 14:03:28.824207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.335 [2024-12-05 14:03:28.824213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.335 [2024-12-05 14:03:28.824219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.335 [2024-12-05 14:03:28.824233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.335 qpair failed and we were unable to recover it.
00:31:46.335 [2024-12-05 14:03:28.834185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.335 [2024-12-05 14:03:28.834239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.335 [2024-12-05 14:03:28.834252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.335 [2024-12-05 14:03:28.834258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.335 [2024-12-05 14:03:28.834264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.335 [2024-12-05 14:03:28.834279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.335 qpair failed and we were unable to recover it.
00:31:46.335 [2024-12-05 14:03:28.844201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.335 [2024-12-05 14:03:28.844256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.335 [2024-12-05 14:03:28.844269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.335 [2024-12-05 14:03:28.844275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.335 [2024-12-05 14:03:28.844281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.335 [2024-12-05 14:03:28.844294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.335 qpair failed and we were unable to recover it.
00:31:46.335 [2024-12-05 14:03:28.854232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.335 [2024-12-05 14:03:28.854289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.335 [2024-12-05 14:03:28.854302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.335 [2024-12-05 14:03:28.854309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.335 [2024-12-05 14:03:28.854320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.335 [2024-12-05 14:03:28.854334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.335 qpair failed and we were unable to recover it.
00:31:46.335 [2024-12-05 14:03:28.864262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.335 [2024-12-05 14:03:28.864316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.335 [2024-12-05 14:03:28.864330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.335 [2024-12-05 14:03:28.864337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.335 [2024-12-05 14:03:28.864343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.335 [2024-12-05 14:03:28.864357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.335 qpair failed and we were unable to recover it.
00:31:46.335 [2024-12-05 14:03:28.874298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.335 [2024-12-05 14:03:28.874357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.335 [2024-12-05 14:03:28.874373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.335 [2024-12-05 14:03:28.874380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.335 [2024-12-05 14:03:28.874386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.335 [2024-12-05 14:03:28.874401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.335 qpair failed and we were unable to recover it.
00:31:46.335 [2024-12-05 14:03:28.884317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.335 [2024-12-05 14:03:28.884374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.335 [2024-12-05 14:03:28.884388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.335 [2024-12-05 14:03:28.884395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.335 [2024-12-05 14:03:28.884400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.335 [2024-12-05 14:03:28.884415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.335 qpair failed and we were unable to recover it.
00:31:46.335 [2024-12-05 14:03:28.894391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.335 [2024-12-05 14:03:28.894445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.335 [2024-12-05 14:03:28.894459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.335 [2024-12-05 14:03:28.894465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.335 [2024-12-05 14:03:28.894471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.335 [2024-12-05 14:03:28.894485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.335 qpair failed and we were unable to recover it.
00:31:46.335 [2024-12-05 14:03:28.904372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.335 [2024-12-05 14:03:28.904426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.335 [2024-12-05 14:03:28.904439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.335 [2024-12-05 14:03:28.904446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.335 [2024-12-05 14:03:28.904451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.335 [2024-12-05 14:03:28.904465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.335 qpair failed and we were unable to recover it.
00:31:46.335 [2024-12-05 14:03:28.914424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.335 [2024-12-05 14:03:28.914478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.335 [2024-12-05 14:03:28.914492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.335 [2024-12-05 14:03:28.914498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.335 [2024-12-05 14:03:28.914504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.335 [2024-12-05 14:03:28.914517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.335 qpair failed and we were unable to recover it.
00:31:46.597 [2024-12-05 14:03:28.924458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.597 [2024-12-05 14:03:28.924518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.597 [2024-12-05 14:03:28.924531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.597 [2024-12-05 14:03:28.924537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.597 [2024-12-05 14:03:28.924543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.597 [2024-12-05 14:03:28.924556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.597 qpair failed and we were unable to recover it.
00:31:46.597 [2024-12-05 14:03:28.934463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.597 [2024-12-05 14:03:28.934550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.597 [2024-12-05 14:03:28.934564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.597 [2024-12-05 14:03:28.934570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.597 [2024-12-05 14:03:28.934575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.597 [2024-12-05 14:03:28.934589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.597 qpair failed and we were unable to recover it.
00:31:46.597 [2024-12-05 14:03:28.944533] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.597 [2024-12-05 14:03:28.944586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.597 [2024-12-05 14:03:28.944602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.597 [2024-12-05 14:03:28.944608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.597 [2024-12-05 14:03:28.944614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.597 [2024-12-05 14:03:28.944628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.597 qpair failed and we were unable to recover it.
00:31:46.597 [2024-12-05 14:03:28.954460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.597 [2024-12-05 14:03:28.954534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.597 [2024-12-05 14:03:28.954547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.597 [2024-12-05 14:03:28.954553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.597 [2024-12-05 14:03:28.954559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.597 [2024-12-05 14:03:28.954573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.597 qpair failed and we were unable to recover it.
00:31:46.597 [2024-12-05 14:03:28.964557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.597 [2024-12-05 14:03:28.964664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.597 [2024-12-05 14:03:28.964677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.597 [2024-12-05 14:03:28.964683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.597 [2024-12-05 14:03:28.964689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.597 [2024-12-05 14:03:28.964702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.597 qpair failed and we were unable to recover it.
00:31:46.597 [2024-12-05 14:03:28.974552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.597 [2024-12-05 14:03:28.974614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.597 [2024-12-05 14:03:28.974628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.597 [2024-12-05 14:03:28.974634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.597 [2024-12-05 14:03:28.974640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.597 [2024-12-05 14:03:28.974654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.597 qpair failed and we were unable to recover it.
00:31:46.597 [2024-12-05 14:03:28.984601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.597 [2024-12-05 14:03:28.984653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.597 [2024-12-05 14:03:28.984667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.597 [2024-12-05 14:03:28.984673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.597 [2024-12-05 14:03:28.984682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.597 [2024-12-05 14:03:28.984696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.597 qpair failed and we were unable to recover it.
00:31:46.597 [2024-12-05 14:03:28.994632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.597 [2024-12-05 14:03:28.994686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.597 [2024-12-05 14:03:28.994699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.597 [2024-12-05 14:03:28.994705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.597 [2024-12-05 14:03:28.994711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.597 [2024-12-05 14:03:28.994725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.597 qpair failed and we were unable to recover it.
00:31:46.597 [2024-12-05 14:03:29.004661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.597 [2024-12-05 14:03:29.004716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.597 [2024-12-05 14:03:29.004730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.597 [2024-12-05 14:03:29.004736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.597 [2024-12-05 14:03:29.004743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.597 [2024-12-05 14:03:29.004757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.597 qpair failed and we were unable to recover it.
00:31:46.597 [2024-12-05 14:03:29.014682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.597 [2024-12-05 14:03:29.014735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.597 [2024-12-05 14:03:29.014749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.597 [2024-12-05 14:03:29.014756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.597 [2024-12-05 14:03:29.014762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.597 [2024-12-05 14:03:29.014777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.597 qpair failed and we were unable to recover it.
00:31:46.597 [2024-12-05 14:03:29.024701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.597 [2024-12-05 14:03:29.024757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.597 [2024-12-05 14:03:29.024771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.597 [2024-12-05 14:03:29.024777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.597 [2024-12-05 14:03:29.024783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.598 [2024-12-05 14:03:29.024797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.598 qpair failed and we were unable to recover it.
00:31:46.598 [2024-12-05 14:03:29.034678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.598 [2024-12-05 14:03:29.034741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.598 [2024-12-05 14:03:29.034754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.598 [2024-12-05 14:03:29.034761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.598 [2024-12-05 14:03:29.034767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.598 [2024-12-05 14:03:29.034781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.598 qpair failed and we were unable to recover it.
00:31:46.598 [2024-12-05 14:03:29.044762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.598 [2024-12-05 14:03:29.044817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.598 [2024-12-05 14:03:29.044831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.598 [2024-12-05 14:03:29.044838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.598 [2024-12-05 14:03:29.044844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.598 [2024-12-05 14:03:29.044858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.598 qpair failed and we were unable to recover it.
00:31:46.598 [2024-12-05 14:03:29.054787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.598 [2024-12-05 14:03:29.054883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.598 [2024-12-05 14:03:29.054897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.598 [2024-12-05 14:03:29.054904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.598 [2024-12-05 14:03:29.054909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.598 [2024-12-05 14:03:29.054923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.598 qpair failed and we were unable to recover it.
00:31:46.598 [2024-12-05 14:03:29.064825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.598 [2024-12-05 14:03:29.064873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.598 [2024-12-05 14:03:29.064886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.598 [2024-12-05 14:03:29.064893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.598 [2024-12-05 14:03:29.064899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.598 [2024-12-05 14:03:29.064912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.598 qpair failed and we were unable to recover it.
00:31:46.598 [2024-12-05 14:03:29.074847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.598 [2024-12-05 14:03:29.074924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.598 [2024-12-05 14:03:29.074942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.598 [2024-12-05 14:03:29.074948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.598 [2024-12-05 14:03:29.074954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.598 [2024-12-05 14:03:29.074968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.598 qpair failed and we were unable to recover it.
00:31:46.598 [2024-12-05 14:03:29.084881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.598 [2024-12-05 14:03:29.084938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.598 [2024-12-05 14:03:29.084951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.598 [2024-12-05 14:03:29.084958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.598 [2024-12-05 14:03:29.084964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.598 [2024-12-05 14:03:29.084978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.598 qpair failed and we were unable to recover it.
00:31:46.598 [2024-12-05 14:03:29.094954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.598 [2024-12-05 14:03:29.095010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.598 [2024-12-05 14:03:29.095023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.598 [2024-12-05 14:03:29.095029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.598 [2024-12-05 14:03:29.095035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.598 [2024-12-05 14:03:29.095049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.598 qpair failed and we were unable to recover it.
00:31:46.598 [2024-12-05 14:03:29.104950] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.598 [2024-12-05 14:03:29.105004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.598 [2024-12-05 14:03:29.105017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.598 [2024-12-05 14:03:29.105024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.598 [2024-12-05 14:03:29.105030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.598 [2024-12-05 14:03:29.105043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.598 qpair failed and we were unable to recover it.
00:31:46.598 [2024-12-05 14:03:29.115009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.598 [2024-12-05 14:03:29.115071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.598 [2024-12-05 14:03:29.115085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.598 [2024-12-05 14:03:29.115091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.598 [2024-12-05 14:03:29.115100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.598 [2024-12-05 14:03:29.115114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.598 qpair failed and we were unable to recover it.
00:31:46.598 [2024-12-05 14:03:29.124963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.598 [2024-12-05 14:03:29.125023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.598 [2024-12-05 14:03:29.125036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.598 [2024-12-05 14:03:29.125043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.598 [2024-12-05 14:03:29.125049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.598 [2024-12-05 14:03:29.125063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.598 qpair failed and we were unable to recover it.
00:31:46.598 [2024-12-05 14:03:29.134947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.598 [2024-12-05 14:03:29.135009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.598 [2024-12-05 14:03:29.135023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.598 [2024-12-05 14:03:29.135030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.598 [2024-12-05 14:03:29.135036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.598 [2024-12-05 14:03:29.135049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.598 qpair failed and we were unable to recover it.
00:31:46.598 [2024-12-05 14:03:29.145013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.598 [2024-12-05 14:03:29.145097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.598 [2024-12-05 14:03:29.145110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.598 [2024-12-05 14:03:29.145116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.598 [2024-12-05 14:03:29.145122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.598 [2024-12-05 14:03:29.145136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.598 qpair failed and we were unable to recover it.
00:31:46.598 [2024-12-05 14:03:29.155163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:46.598 [2024-12-05 14:03:29.155234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:46.598 [2024-12-05 14:03:29.155247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:46.598 [2024-12-05 14:03:29.155254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:46.598 [2024-12-05 14:03:29.155260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:46.598 [2024-12-05 14:03:29.155274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:46.599 qpair failed and we were unable to recover it.
00:31:46.599 [2024-12-05 14:03:29.165113] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.599 [2024-12-05 14:03:29.165173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.599 [2024-12-05 14:03:29.165188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.599 [2024-12-05 14:03:29.165194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.599 [2024-12-05 14:03:29.165200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.599 [2024-12-05 14:03:29.165214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.599 qpair failed and we were unable to recover it. 
00:31:46.599 [2024-12-05 14:03:29.175135] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.599 [2024-12-05 14:03:29.175190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.599 [2024-12-05 14:03:29.175204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.599 [2024-12-05 14:03:29.175211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.599 [2024-12-05 14:03:29.175218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.599 [2024-12-05 14:03:29.175232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.599 qpair failed and we were unable to recover it. 
00:31:46.860 [2024-12-05 14:03:29.185159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.860 [2024-12-05 14:03:29.185218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.860 [2024-12-05 14:03:29.185234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.860 [2024-12-05 14:03:29.185241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.860 [2024-12-05 14:03:29.185246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.860 [2024-12-05 14:03:29.185261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.860 qpair failed and we were unable to recover it. 
00:31:46.860 [2024-12-05 14:03:29.195134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.860 [2024-12-05 14:03:29.195190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.860 [2024-12-05 14:03:29.195205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.860 [2024-12-05 14:03:29.195212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.860 [2024-12-05 14:03:29.195218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.860 [2024-12-05 14:03:29.195232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.860 qpair failed and we were unable to recover it. 
00:31:46.860 [2024-12-05 14:03:29.205159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.860 [2024-12-05 14:03:29.205213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.860 [2024-12-05 14:03:29.205230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.860 [2024-12-05 14:03:29.205237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.860 [2024-12-05 14:03:29.205243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.860 [2024-12-05 14:03:29.205258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.860 qpair failed and we were unable to recover it. 
00:31:46.860 [2024-12-05 14:03:29.215157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.860 [2024-12-05 14:03:29.215229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.860 [2024-12-05 14:03:29.215243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.860 [2024-12-05 14:03:29.215250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.860 [2024-12-05 14:03:29.215256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.860 [2024-12-05 14:03:29.215270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.860 qpair failed and we were unable to recover it. 
00:31:46.860 [2024-12-05 14:03:29.225264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.860 [2024-12-05 14:03:29.225318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.860 [2024-12-05 14:03:29.225332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.860 [2024-12-05 14:03:29.225338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.860 [2024-12-05 14:03:29.225345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.860 [2024-12-05 14:03:29.225359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.860 qpair failed and we were unable to recover it. 
00:31:46.860 [2024-12-05 14:03:29.235393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.860 [2024-12-05 14:03:29.235457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.860 [2024-12-05 14:03:29.235471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.860 [2024-12-05 14:03:29.235478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.860 [2024-12-05 14:03:29.235484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.860 [2024-12-05 14:03:29.235498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.860 qpair failed and we were unable to recover it. 
00:31:46.860 [2024-12-05 14:03:29.245371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.860 [2024-12-05 14:03:29.245429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.860 [2024-12-05 14:03:29.245443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.860 [2024-12-05 14:03:29.245450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.860 [2024-12-05 14:03:29.245459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.860 [2024-12-05 14:03:29.245473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.860 qpair failed and we were unable to recover it. 
00:31:46.860 [2024-12-05 14:03:29.255450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.860 [2024-12-05 14:03:29.255509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.860 [2024-12-05 14:03:29.255523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.860 [2024-12-05 14:03:29.255530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.860 [2024-12-05 14:03:29.255535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.860 [2024-12-05 14:03:29.255550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.860 qpair failed and we were unable to recover it. 
00:31:46.861 [2024-12-05 14:03:29.265446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.861 [2024-12-05 14:03:29.265508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.861 [2024-12-05 14:03:29.265521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.861 [2024-12-05 14:03:29.265528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.861 [2024-12-05 14:03:29.265534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.861 [2024-12-05 14:03:29.265548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.861 qpair failed and we were unable to recover it. 
00:31:46.861 [2024-12-05 14:03:29.275415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.861 [2024-12-05 14:03:29.275468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.861 [2024-12-05 14:03:29.275482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.861 [2024-12-05 14:03:29.275488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.861 [2024-12-05 14:03:29.275494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.861 [2024-12-05 14:03:29.275508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.861 qpair failed and we were unable to recover it. 
00:31:46.861 [2024-12-05 14:03:29.285468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.861 [2024-12-05 14:03:29.285527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.861 [2024-12-05 14:03:29.285542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.861 [2024-12-05 14:03:29.285548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.861 [2024-12-05 14:03:29.285554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.861 [2024-12-05 14:03:29.285570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.861 qpair failed and we were unable to recover it. 
00:31:46.861 [2024-12-05 14:03:29.295404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.861 [2024-12-05 14:03:29.295458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.861 [2024-12-05 14:03:29.295475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.861 [2024-12-05 14:03:29.295483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.861 [2024-12-05 14:03:29.295489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.861 [2024-12-05 14:03:29.295504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.861 qpair failed and we were unable to recover it. 
00:31:46.861 [2024-12-05 14:03:29.305500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.861 [2024-12-05 14:03:29.305564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.861 [2024-12-05 14:03:29.305578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.861 [2024-12-05 14:03:29.305584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.861 [2024-12-05 14:03:29.305590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.861 [2024-12-05 14:03:29.305604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.861 qpair failed and we were unable to recover it. 
00:31:46.861 [2024-12-05 14:03:29.315505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.861 [2024-12-05 14:03:29.315562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.861 [2024-12-05 14:03:29.315576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.861 [2024-12-05 14:03:29.315584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.861 [2024-12-05 14:03:29.315591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.861 [2024-12-05 14:03:29.315606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.861 qpair failed and we were unable to recover it. 
00:31:46.861 [2024-12-05 14:03:29.325522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.861 [2024-12-05 14:03:29.325577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.861 [2024-12-05 14:03:29.325591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.861 [2024-12-05 14:03:29.325597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.861 [2024-12-05 14:03:29.325603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.861 [2024-12-05 14:03:29.325617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.861 qpair failed and we were unable to recover it. 
00:31:46.861 [2024-12-05 14:03:29.335498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.861 [2024-12-05 14:03:29.335547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.861 [2024-12-05 14:03:29.335565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.861 [2024-12-05 14:03:29.335571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.861 [2024-12-05 14:03:29.335577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.861 [2024-12-05 14:03:29.335591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.861 qpair failed and we were unable to recover it. 
00:31:46.861 [2024-12-05 14:03:29.345641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.861 [2024-12-05 14:03:29.345696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.861 [2024-12-05 14:03:29.345710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.861 [2024-12-05 14:03:29.345716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.861 [2024-12-05 14:03:29.345722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.861 [2024-12-05 14:03:29.345737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.861 qpair failed and we were unable to recover it. 
00:31:46.861 [2024-12-05 14:03:29.355659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.861 [2024-12-05 14:03:29.355718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.861 [2024-12-05 14:03:29.355731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.861 [2024-12-05 14:03:29.355737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.861 [2024-12-05 14:03:29.355743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.861 [2024-12-05 14:03:29.355757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.861 qpair failed and we were unable to recover it. 
00:31:46.861 [2024-12-05 14:03:29.365725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.861 [2024-12-05 14:03:29.365778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.861 [2024-12-05 14:03:29.365791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.861 [2024-12-05 14:03:29.365798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.861 [2024-12-05 14:03:29.365804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.861 [2024-12-05 14:03:29.365818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.861 qpair failed and we were unable to recover it. 
00:31:46.861 [2024-12-05 14:03:29.375620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.861 [2024-12-05 14:03:29.375672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.861 [2024-12-05 14:03:29.375686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.861 [2024-12-05 14:03:29.375692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.861 [2024-12-05 14:03:29.375701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.861 [2024-12-05 14:03:29.375715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.861 qpair failed and we were unable to recover it. 
00:31:46.861 [2024-12-05 14:03:29.385635] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.861 [2024-12-05 14:03:29.385721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.861 [2024-12-05 14:03:29.385736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.861 [2024-12-05 14:03:29.385742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.861 [2024-12-05 14:03:29.385748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.862 [2024-12-05 14:03:29.385762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.862 qpair failed and we were unable to recover it. 
00:31:46.862 [2024-12-05 14:03:29.395744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.862 [2024-12-05 14:03:29.395798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.862 [2024-12-05 14:03:29.395812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.862 [2024-12-05 14:03:29.395818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.862 [2024-12-05 14:03:29.395824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.862 [2024-12-05 14:03:29.395837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.862 qpair failed and we were unable to recover it. 
00:31:46.862 [2024-12-05 14:03:29.405782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.862 [2024-12-05 14:03:29.405835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.862 [2024-12-05 14:03:29.405850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.862 [2024-12-05 14:03:29.405857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.862 [2024-12-05 14:03:29.405863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.862 [2024-12-05 14:03:29.405876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.862 qpair failed and we were unable to recover it. 
00:31:46.862 [2024-12-05 14:03:29.415822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.862 [2024-12-05 14:03:29.415880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.862 [2024-12-05 14:03:29.415894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.862 [2024-12-05 14:03:29.415901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.862 [2024-12-05 14:03:29.415907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.862 [2024-12-05 14:03:29.415921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.862 qpair failed and we were unable to recover it. 
00:31:46.862 [2024-12-05 14:03:29.425878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.862 [2024-12-05 14:03:29.425931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.862 [2024-12-05 14:03:29.425945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.862 [2024-12-05 14:03:29.425952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.862 [2024-12-05 14:03:29.425958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.862 [2024-12-05 14:03:29.425972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.862 qpair failed and we were unable to recover it. 
00:31:46.862 [2024-12-05 14:03:29.435791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:46.862 [2024-12-05 14:03:29.435852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:46.862 [2024-12-05 14:03:29.435866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:46.862 [2024-12-05 14:03:29.435872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.862 [2024-12-05 14:03:29.435878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:46.862 [2024-12-05 14:03:29.435893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:46.862 qpair failed and we were unable to recover it. 
00:31:47.123 [2024-12-05 14:03:29.445819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.123 [2024-12-05 14:03:29.445909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.123 [2024-12-05 14:03:29.445924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.123 [2024-12-05 14:03:29.445931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.123 [2024-12-05 14:03:29.445937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.123 [2024-12-05 14:03:29.445952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.123 qpair failed and we were unable to recover it. 
00:31:47.123 [2024-12-05 14:03:29.455827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.123 [2024-12-05 14:03:29.455879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.123 [2024-12-05 14:03:29.455893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.123 [2024-12-05 14:03:29.455900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.123 [2024-12-05 14:03:29.455906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.123 [2024-12-05 14:03:29.455920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.123 qpair failed and we were unable to recover it. 
00:31:47.123 [2024-12-05 14:03:29.465896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.123 [2024-12-05 14:03:29.465983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.123 [2024-12-05 14:03:29.466000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.123 [2024-12-05 14:03:29.466006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.123 [2024-12-05 14:03:29.466012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.123 [2024-12-05 14:03:29.466026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.123 qpair failed and we were unable to recover it. 
00:31:47.123 [2024-12-05 14:03:29.475967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.123 [2024-12-05 14:03:29.476024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.123 [2024-12-05 14:03:29.476038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.123 [2024-12-05 14:03:29.476045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.123 [2024-12-05 14:03:29.476051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.123 [2024-12-05 14:03:29.476065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.123 qpair failed and we were unable to recover it. 
00:31:47.123 [2024-12-05 14:03:29.485935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.123 [2024-12-05 14:03:29.485985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.123 [2024-12-05 14:03:29.485999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.123 [2024-12-05 14:03:29.486005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.123 [2024-12-05 14:03:29.486011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.123 [2024-12-05 14:03:29.486026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.123 qpair failed and we were unable to recover it. 
00:31:47.123 [2024-12-05 14:03:29.496037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.123 [2024-12-05 14:03:29.496126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.123 [2024-12-05 14:03:29.496139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.123 [2024-12-05 14:03:29.496146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.123 [2024-12-05 14:03:29.496151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.123 [2024-12-05 14:03:29.496165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.123 qpair failed and we were unable to recover it. 
00:31:47.123 [2024-12-05 14:03:29.506051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.123 [2024-12-05 14:03:29.506107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.123 [2024-12-05 14:03:29.506121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.123 [2024-12-05 14:03:29.506128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.123 [2024-12-05 14:03:29.506139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.123 [2024-12-05 14:03:29.506154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.123 qpair failed and we were unable to recover it. 
00:31:47.123 [2024-12-05 14:03:29.516018] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.123 [2024-12-05 14:03:29.516074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.123 [2024-12-05 14:03:29.516088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.123 [2024-12-05 14:03:29.516094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.123 [2024-12-05 14:03:29.516101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.123 [2024-12-05 14:03:29.516115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.124 qpair failed and we were unable to recover it. 
00:31:47.124 [2024-12-05 14:03:29.526029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.124 [2024-12-05 14:03:29.526081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.124 [2024-12-05 14:03:29.526095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.124 [2024-12-05 14:03:29.526102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.124 [2024-12-05 14:03:29.526108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.124 [2024-12-05 14:03:29.526122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.124 qpair failed and we were unable to recover it. 
00:31:47.124 [2024-12-05 14:03:29.536105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.124 [2024-12-05 14:03:29.536166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.124 [2024-12-05 14:03:29.536180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.124 [2024-12-05 14:03:29.536187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.124 [2024-12-05 14:03:29.536192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.124 [2024-12-05 14:03:29.536207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.124 qpair failed and we were unable to recover it. 
00:31:47.124 [2024-12-05 14:03:29.546152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.124 [2024-12-05 14:03:29.546206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.124 [2024-12-05 14:03:29.546220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.124 [2024-12-05 14:03:29.546226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.124 [2024-12-05 14:03:29.546233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.124 [2024-12-05 14:03:29.546247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.124 qpair failed and we were unable to recover it. 
00:31:47.124 [2024-12-05 14:03:29.556247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.124 [2024-12-05 14:03:29.556305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.124 [2024-12-05 14:03:29.556318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.124 [2024-12-05 14:03:29.556324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.124 [2024-12-05 14:03:29.556330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.124 [2024-12-05 14:03:29.556345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.124 qpair failed and we were unable to recover it. 
00:31:47.124 [2024-12-05 14:03:29.566207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.124 [2024-12-05 14:03:29.566263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.124 [2024-12-05 14:03:29.566277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.124 [2024-12-05 14:03:29.566285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.124 [2024-12-05 14:03:29.566291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.124 [2024-12-05 14:03:29.566306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.124 qpair failed and we were unable to recover it. 
00:31:47.124 [2024-12-05 14:03:29.576219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.124 [2024-12-05 14:03:29.576275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.124 [2024-12-05 14:03:29.576289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.124 [2024-12-05 14:03:29.576296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.124 [2024-12-05 14:03:29.576302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.124 [2024-12-05 14:03:29.576316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.124 qpair failed and we were unable to recover it. 
00:31:47.124 [2024-12-05 14:03:29.586258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.124 [2024-12-05 14:03:29.586314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.124 [2024-12-05 14:03:29.586328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.124 [2024-12-05 14:03:29.586335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.124 [2024-12-05 14:03:29.586340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.124 [2024-12-05 14:03:29.586355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.124 qpair failed and we were unable to recover it. 
00:31:47.124 [2024-12-05 14:03:29.596297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.124 [2024-12-05 14:03:29.596355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.124 [2024-12-05 14:03:29.596376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.124 [2024-12-05 14:03:29.596383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.124 [2024-12-05 14:03:29.596389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.124 [2024-12-05 14:03:29.596404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.124 qpair failed and we were unable to recover it. 
00:31:47.124 [2024-12-05 14:03:29.606360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.124 [2024-12-05 14:03:29.606451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.124 [2024-12-05 14:03:29.606466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.124 [2024-12-05 14:03:29.606473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.124 [2024-12-05 14:03:29.606478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.124 [2024-12-05 14:03:29.606493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.124 qpair failed and we were unable to recover it. 
00:31:47.124 [2024-12-05 14:03:29.616352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.124 [2024-12-05 14:03:29.616407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.124 [2024-12-05 14:03:29.616421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.124 [2024-12-05 14:03:29.616428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.124 [2024-12-05 14:03:29.616434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.124 [2024-12-05 14:03:29.616449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.124 qpair failed and we were unable to recover it. 
00:31:47.124 [2024-12-05 14:03:29.626371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.124 [2024-12-05 14:03:29.626422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.124 [2024-12-05 14:03:29.626436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.124 [2024-12-05 14:03:29.626442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.124 [2024-12-05 14:03:29.626448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.124 [2024-12-05 14:03:29.626462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.124 qpair failed and we were unable to recover it. 
00:31:47.124 [2024-12-05 14:03:29.636428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.124 [2024-12-05 14:03:29.636528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.124 [2024-12-05 14:03:29.636542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.124 [2024-12-05 14:03:29.636548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.124 [2024-12-05 14:03:29.636557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.124 [2024-12-05 14:03:29.636571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.124 qpair failed and we were unable to recover it. 
00:31:47.124 [2024-12-05 14:03:29.646449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.124 [2024-12-05 14:03:29.646503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.124 [2024-12-05 14:03:29.646517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.124 [2024-12-05 14:03:29.646524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.124 [2024-12-05 14:03:29.646530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.124 [2024-12-05 14:03:29.646544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.124 qpair failed and we were unable to recover it. 
00:31:47.125 [2024-12-05 14:03:29.656516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.125 [2024-12-05 14:03:29.656575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.125 [2024-12-05 14:03:29.656589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.125 [2024-12-05 14:03:29.656596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.125 [2024-12-05 14:03:29.656601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.125 [2024-12-05 14:03:29.656615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.125 qpair failed and we were unable to recover it. 
00:31:47.125 [2024-12-05 14:03:29.666487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.125 [2024-12-05 14:03:29.666542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.125 [2024-12-05 14:03:29.666556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.125 [2024-12-05 14:03:29.666563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.125 [2024-12-05 14:03:29.666569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.125 [2024-12-05 14:03:29.666583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.125 qpair failed and we were unable to recover it. 
00:31:47.125 [2024-12-05 14:03:29.676583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.125 [2024-12-05 14:03:29.676640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.125 [2024-12-05 14:03:29.676654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.125 [2024-12-05 14:03:29.676660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.125 [2024-12-05 14:03:29.676666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.125 [2024-12-05 14:03:29.676680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.125 qpair failed and we were unable to recover it. 
00:31:47.125 [2024-12-05 14:03:29.686574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.125 [2024-12-05 14:03:29.686627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.125 [2024-12-05 14:03:29.686640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.125 [2024-12-05 14:03:29.686647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.125 [2024-12-05 14:03:29.686653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.125 [2024-12-05 14:03:29.686667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.125 qpair failed and we were unable to recover it. 
00:31:47.125 [2024-12-05 14:03:29.696565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.125 [2024-12-05 14:03:29.696650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.125 [2024-12-05 14:03:29.696663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.125 [2024-12-05 14:03:29.696670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.125 [2024-12-05 14:03:29.696676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.125 [2024-12-05 14:03:29.696690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.125 qpair failed and we were unable to recover it. 
00:31:47.125 [2024-12-05 14:03:29.706548] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.125 [2024-12-05 14:03:29.706601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.125 [2024-12-05 14:03:29.706615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.125 [2024-12-05 14:03:29.706622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.125 [2024-12-05 14:03:29.706628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.125 [2024-12-05 14:03:29.706643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.125 qpair failed and we were unable to recover it. 
00:31:47.386 [2024-12-05 14:03:29.716657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.386 [2024-12-05 14:03:29.716747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.386 [2024-12-05 14:03:29.716760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.386 [2024-12-05 14:03:29.716767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.386 [2024-12-05 14:03:29.716773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.386 [2024-12-05 14:03:29.716787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.386 qpair failed and we were unable to recover it. 
00:31:47.386 [2024-12-05 14:03:29.726681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.386 [2024-12-05 14:03:29.726750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.386 [2024-12-05 14:03:29.726767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.386 [2024-12-05 14:03:29.726774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.386 [2024-12-05 14:03:29.726780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.386 [2024-12-05 14:03:29.726794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.386 qpair failed and we were unable to recover it. 
00:31:47.386 [2024-12-05 14:03:29.736746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.386 [2024-12-05 14:03:29.736850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.386 [2024-12-05 14:03:29.736863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.386 [2024-12-05 14:03:29.736869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.386 [2024-12-05 14:03:29.736875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.386 [2024-12-05 14:03:29.736889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.386 qpair failed and we were unable to recover it. 
00:31:47.386 [2024-12-05 14:03:29.746730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.386 [2024-12-05 14:03:29.746786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.386 [2024-12-05 14:03:29.746800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.386 [2024-12-05 14:03:29.746806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.386 [2024-12-05 14:03:29.746812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.386 [2024-12-05 14:03:29.746826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.386 qpair failed and we were unable to recover it.
00:31:47.386 [2024-12-05 14:03:29.756760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.386 [2024-12-05 14:03:29.756813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.386 [2024-12-05 14:03:29.756826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.386 [2024-12-05 14:03:29.756833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.386 [2024-12-05 14:03:29.756839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.386 [2024-12-05 14:03:29.756853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.386 qpair failed and we were unable to recover it.
00:31:47.386 [2024-12-05 14:03:29.766791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.387 [2024-12-05 14:03:29.766861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.387 [2024-12-05 14:03:29.766874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.387 [2024-12-05 14:03:29.766880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.387 [2024-12-05 14:03:29.766889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.387 [2024-12-05 14:03:29.766904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.387 qpair failed and we were unable to recover it.
00:31:47.387 [2024-12-05 14:03:29.776826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.387 [2024-12-05 14:03:29.776876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.387 [2024-12-05 14:03:29.776889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.387 [2024-12-05 14:03:29.776896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.387 [2024-12-05 14:03:29.776901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.387 [2024-12-05 14:03:29.776915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.387 qpair failed and we were unable to recover it.
00:31:47.387 [2024-12-05 14:03:29.786839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.387 [2024-12-05 14:03:29.786918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.387 [2024-12-05 14:03:29.786931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.387 [2024-12-05 14:03:29.786938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.387 [2024-12-05 14:03:29.786943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.387 [2024-12-05 14:03:29.786957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.387 qpair failed and we were unable to recover it.
00:31:47.387 [2024-12-05 14:03:29.796882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.387 [2024-12-05 14:03:29.796936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.387 [2024-12-05 14:03:29.796949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.387 [2024-12-05 14:03:29.796955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.387 [2024-12-05 14:03:29.796961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.387 [2024-12-05 14:03:29.796975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.387 qpair failed and we were unable to recover it.
00:31:47.387 [2024-12-05 14:03:29.806899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.387 [2024-12-05 14:03:29.806954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.387 [2024-12-05 14:03:29.806969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.387 [2024-12-05 14:03:29.806976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.387 [2024-12-05 14:03:29.806982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.387 [2024-12-05 14:03:29.806997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.387 qpair failed and we were unable to recover it.
00:31:47.387 [2024-12-05 14:03:29.816945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.387 [2024-12-05 14:03:29.816994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.387 [2024-12-05 14:03:29.817009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.387 [2024-12-05 14:03:29.817015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.387 [2024-12-05 14:03:29.817022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.387 [2024-12-05 14:03:29.817036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.387 qpair failed and we were unable to recover it.
00:31:47.387 [2024-12-05 14:03:29.826873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.387 [2024-12-05 14:03:29.826935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.387 [2024-12-05 14:03:29.826949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.387 [2024-12-05 14:03:29.826955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.387 [2024-12-05 14:03:29.826961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.387 [2024-12-05 14:03:29.826976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.387 qpair failed and we were unable to recover it.
00:31:47.387 [2024-12-05 14:03:29.837007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.387 [2024-12-05 14:03:29.837063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.387 [2024-12-05 14:03:29.837077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.387 [2024-12-05 14:03:29.837083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.387 [2024-12-05 14:03:29.837089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.387 [2024-12-05 14:03:29.837103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.387 qpair failed and we were unable to recover it.
00:31:47.387 [2024-12-05 14:03:29.847011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.387 [2024-12-05 14:03:29.847111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.387 [2024-12-05 14:03:29.847125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.387 [2024-12-05 14:03:29.847131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.387 [2024-12-05 14:03:29.847137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.387 [2024-12-05 14:03:29.847151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.387 qpair failed and we were unable to recover it.
00:31:47.387 [2024-12-05 14:03:29.857047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.387 [2024-12-05 14:03:29.857098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.387 [2024-12-05 14:03:29.857115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.387 [2024-12-05 14:03:29.857121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.387 [2024-12-05 14:03:29.857127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.387 [2024-12-05 14:03:29.857141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.387 qpair failed and we were unable to recover it.
00:31:47.387 [2024-12-05 14:03:29.866997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.387 [2024-12-05 14:03:29.867050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.387 [2024-12-05 14:03:29.867064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.387 [2024-12-05 14:03:29.867070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.387 [2024-12-05 14:03:29.867076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.387 [2024-12-05 14:03:29.867090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.387 qpair failed and we were unable to recover it.
00:31:47.387 [2024-12-05 14:03:29.877136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.387 [2024-12-05 14:03:29.877188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.387 [2024-12-05 14:03:29.877202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.387 [2024-12-05 14:03:29.877208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.387 [2024-12-05 14:03:29.877214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.387 [2024-12-05 14:03:29.877228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.387 qpair failed and we were unable to recover it.
00:31:47.387 [2024-12-05 14:03:29.887147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.387 [2024-12-05 14:03:29.887199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.387 [2024-12-05 14:03:29.887212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.387 [2024-12-05 14:03:29.887218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.387 [2024-12-05 14:03:29.887224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.387 [2024-12-05 14:03:29.887239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.387 qpair failed and we were unable to recover it.
00:31:47.387 [2024-12-05 14:03:29.897177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.387 [2024-12-05 14:03:29.897241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.388 [2024-12-05 14:03:29.897254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.388 [2024-12-05 14:03:29.897260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.388 [2024-12-05 14:03:29.897270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.388 [2024-12-05 14:03:29.897284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.388 qpair failed and we were unable to recover it.
00:31:47.388 [2024-12-05 14:03:29.907180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.388 [2024-12-05 14:03:29.907227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.388 [2024-12-05 14:03:29.907241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.388 [2024-12-05 14:03:29.907248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.388 [2024-12-05 14:03:29.907254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.388 [2024-12-05 14:03:29.907267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.388 qpair failed and we were unable to recover it.
00:31:47.388 [2024-12-05 14:03:29.917220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.388 [2024-12-05 14:03:29.917276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.388 [2024-12-05 14:03:29.917290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.388 [2024-12-05 14:03:29.917296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.388 [2024-12-05 14:03:29.917302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.388 [2024-12-05 14:03:29.917316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.388 qpair failed and we were unable to recover it.
00:31:47.388 [2024-12-05 14:03:29.927243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.388 [2024-12-05 14:03:29.927295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.388 [2024-12-05 14:03:29.927310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.388 [2024-12-05 14:03:29.927316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.388 [2024-12-05 14:03:29.927322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.388 [2024-12-05 14:03:29.927336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.388 qpair failed and we were unable to recover it.
00:31:47.388 [2024-12-05 14:03:29.937299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.388 [2024-12-05 14:03:29.937364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.388 [2024-12-05 14:03:29.937382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.388 [2024-12-05 14:03:29.937389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.388 [2024-12-05 14:03:29.937394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.388 [2024-12-05 14:03:29.937408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.388 qpair failed and we were unable to recover it.
00:31:47.388 [2024-12-05 14:03:29.947297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.388 [2024-12-05 14:03:29.947364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.388 [2024-12-05 14:03:29.947381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.388 [2024-12-05 14:03:29.947388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.388 [2024-12-05 14:03:29.947394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.388 [2024-12-05 14:03:29.947408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.388 qpair failed and we were unable to recover it.
00:31:47.388 [2024-12-05 14:03:29.957383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.388 [2024-12-05 14:03:29.957437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.388 [2024-12-05 14:03:29.957450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.388 [2024-12-05 14:03:29.957456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.388 [2024-12-05 14:03:29.957462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.388 [2024-12-05 14:03:29.957476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.388 qpair failed and we were unable to recover it.
00:31:47.388 [2024-12-05 14:03:29.967363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.388 [2024-12-05 14:03:29.967421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.388 [2024-12-05 14:03:29.967434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.388 [2024-12-05 14:03:29.967441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.388 [2024-12-05 14:03:29.967447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.388 [2024-12-05 14:03:29.967461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.388 qpair failed and we were unable to recover it.
00:31:47.649 [2024-12-05 14:03:29.977389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.649 [2024-12-05 14:03:29.977473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.649 [2024-12-05 14:03:29.977488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.649 [2024-12-05 14:03:29.977494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.649 [2024-12-05 14:03:29.977501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.649 [2024-12-05 14:03:29.977517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.649 qpair failed and we were unable to recover it.
00:31:47.649 [2024-12-05 14:03:29.987408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.649 [2024-12-05 14:03:29.987469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.649 [2024-12-05 14:03:29.987486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.649 [2024-12-05 14:03:29.987493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.649 [2024-12-05 14:03:29.987498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.649 [2024-12-05 14:03:29.987513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.649 qpair failed and we were unable to recover it.
00:31:47.649 [2024-12-05 14:03:29.997451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.649 [2024-12-05 14:03:29.997509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.649 [2024-12-05 14:03:29.997523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.649 [2024-12-05 14:03:29.997530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.649 [2024-12-05 14:03:29.997535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.649 [2024-12-05 14:03:29.997550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.649 qpair failed and we were unable to recover it.
00:31:47.649 [2024-12-05 14:03:30.007483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.649 [2024-12-05 14:03:30.007546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.649 [2024-12-05 14:03:30.007565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.649 [2024-12-05 14:03:30.007572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.649 [2024-12-05 14:03:30.007579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.649 [2024-12-05 14:03:30.007596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.649 qpair failed and we were unable to recover it.
00:31:47.649 [2024-12-05 14:03:30.017471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.649 [2024-12-05 14:03:30.017563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.649 [2024-12-05 14:03:30.017579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.649 [2024-12-05 14:03:30.017587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.649 [2024-12-05 14:03:30.017592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.649 [2024-12-05 14:03:30.017607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.649 qpair failed and we were unable to recover it.
00:31:47.649 [2024-12-05 14:03:30.027717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.649 [2024-12-05 14:03:30.027791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.649 [2024-12-05 14:03:30.027807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.649 [2024-12-05 14:03:30.027815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.649 [2024-12-05 14:03:30.027826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.649 [2024-12-05 14:03:30.027843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.649 qpair failed and we were unable to recover it.
00:31:47.649 [2024-12-05 14:03:30.037604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.649 [2024-12-05 14:03:30.037678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.649 [2024-12-05 14:03:30.037692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.649 [2024-12-05 14:03:30.037698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.649 [2024-12-05 14:03:30.037704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.649 [2024-12-05 14:03:30.037719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.649 qpair failed and we were unable to recover it.
00:31:47.649 [2024-12-05 14:03:30.047590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.650 [2024-12-05 14:03:30.047646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.650 [2024-12-05 14:03:30.047660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.650 [2024-12-05 14:03:30.047666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.650 [2024-12-05 14:03:30.047673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.650 [2024-12-05 14:03:30.047687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.650 qpair failed and we were unable to recover it.
00:31:47.650 [2024-12-05 14:03:30.057586] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.650 [2024-12-05 14:03:30.057658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.650 [2024-12-05 14:03:30.057672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.650 [2024-12-05 14:03:30.057679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.650 [2024-12-05 14:03:30.057685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.650 [2024-12-05 14:03:30.057700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.650 qpair failed and we were unable to recover it.
00:31:47.650 [2024-12-05 14:03:30.067712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.650 [2024-12-05 14:03:30.067769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.650 [2024-12-05 14:03:30.067784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.650 [2024-12-05 14:03:30.067791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.650 [2024-12-05 14:03:30.067798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.650 [2024-12-05 14:03:30.067814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.650 qpair failed and we were unable to recover it.
00:31:47.650 [2024-12-05 14:03:30.077690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.650 [2024-12-05 14:03:30.077749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.650 [2024-12-05 14:03:30.077763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.650 [2024-12-05 14:03:30.077770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.650 [2024-12-05 14:03:30.077776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.650 [2024-12-05 14:03:30.077791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.650 qpair failed and we were unable to recover it.
00:31:47.650 [2024-12-05 14:03:30.087697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:47.650 [2024-12-05 14:03:30.087752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:47.650 [2024-12-05 14:03:30.087765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:47.650 [2024-12-05 14:03:30.087772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:47.650 [2024-12-05 14:03:30.087778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:47.650 [2024-12-05 14:03:30.087792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:47.650 qpair failed and we were unable to recover it.
00:31:47.650 [2024-12-05 14:03:30.097677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.650 [2024-12-05 14:03:30.097732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.650 [2024-12-05 14:03:30.097746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.650 [2024-12-05 14:03:30.097754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.650 [2024-12-05 14:03:30.097760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.650 [2024-12-05 14:03:30.097775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.650 qpair failed and we were unable to recover it. 
00:31:47.650 [2024-12-05 14:03:30.107738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.650 [2024-12-05 14:03:30.107792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.650 [2024-12-05 14:03:30.107807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.650 [2024-12-05 14:03:30.107813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.650 [2024-12-05 14:03:30.107819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.650 [2024-12-05 14:03:30.107833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.650 qpair failed and we were unable to recover it. 
00:31:47.650 [2024-12-05 14:03:30.117777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.650 [2024-12-05 14:03:30.117830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.650 [2024-12-05 14:03:30.117847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.650 [2024-12-05 14:03:30.117854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.650 [2024-12-05 14:03:30.117860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.650 [2024-12-05 14:03:30.117875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.650 qpair failed and we were unable to recover it. 
00:31:47.650 [2024-12-05 14:03:30.127856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.650 [2024-12-05 14:03:30.127911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.650 [2024-12-05 14:03:30.127925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.650 [2024-12-05 14:03:30.127931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.650 [2024-12-05 14:03:30.127937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.650 [2024-12-05 14:03:30.127952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.650 qpair failed and we were unable to recover it. 
00:31:47.650 [2024-12-05 14:03:30.137870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.650 [2024-12-05 14:03:30.137922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.650 [2024-12-05 14:03:30.137935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.650 [2024-12-05 14:03:30.137942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.650 [2024-12-05 14:03:30.137948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.650 [2024-12-05 14:03:30.137962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.650 qpair failed and we were unable to recover it. 
00:31:47.650 [2024-12-05 14:03:30.147850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.650 [2024-12-05 14:03:30.147900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.650 [2024-12-05 14:03:30.147914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.650 [2024-12-05 14:03:30.147920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.650 [2024-12-05 14:03:30.147926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.650 [2024-12-05 14:03:30.147940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.650 qpair failed and we were unable to recover it. 
00:31:47.650 [2024-12-05 14:03:30.157892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.650 [2024-12-05 14:03:30.157946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.650 [2024-12-05 14:03:30.157960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.650 [2024-12-05 14:03:30.157966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.650 [2024-12-05 14:03:30.157978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.650 [2024-12-05 14:03:30.157992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.650 qpair failed and we were unable to recover it. 
00:31:47.650 [2024-12-05 14:03:30.167876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.650 [2024-12-05 14:03:30.167932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.650 [2024-12-05 14:03:30.167945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.650 [2024-12-05 14:03:30.167951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.650 [2024-12-05 14:03:30.167957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.650 [2024-12-05 14:03:30.167971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.650 qpair failed and we were unable to recover it. 
00:31:47.650 [2024-12-05 14:03:30.177946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.650 [2024-12-05 14:03:30.177999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.651 [2024-12-05 14:03:30.178013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.651 [2024-12-05 14:03:30.178019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.651 [2024-12-05 14:03:30.178025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.651 [2024-12-05 14:03:30.178039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.651 qpair failed and we were unable to recover it. 
00:31:47.651 [2024-12-05 14:03:30.187898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.651 [2024-12-05 14:03:30.187976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.651 [2024-12-05 14:03:30.187989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.651 [2024-12-05 14:03:30.187996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.651 [2024-12-05 14:03:30.188002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.651 [2024-12-05 14:03:30.188017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.651 qpair failed and we were unable to recover it. 
00:31:47.651 [2024-12-05 14:03:30.198010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.651 [2024-12-05 14:03:30.198064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.651 [2024-12-05 14:03:30.198078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.651 [2024-12-05 14:03:30.198084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.651 [2024-12-05 14:03:30.198090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.651 [2024-12-05 14:03:30.198104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.651 qpair failed and we were unable to recover it. 
00:31:47.651 [2024-12-05 14:03:30.208064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.651 [2024-12-05 14:03:30.208126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.651 [2024-12-05 14:03:30.208141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.651 [2024-12-05 14:03:30.208147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.651 [2024-12-05 14:03:30.208153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.651 [2024-12-05 14:03:30.208167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.651 qpair failed and we were unable to recover it. 
00:31:47.651 [2024-12-05 14:03:30.218063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.651 [2024-12-05 14:03:30.218114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.651 [2024-12-05 14:03:30.218129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.651 [2024-12-05 14:03:30.218136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.651 [2024-12-05 14:03:30.218142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.651 [2024-12-05 14:03:30.218157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.651 qpair failed and we were unable to recover it. 
00:31:47.651 [2024-12-05 14:03:30.228033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.651 [2024-12-05 14:03:30.228131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.651 [2024-12-05 14:03:30.228145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.651 [2024-12-05 14:03:30.228151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.651 [2024-12-05 14:03:30.228157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.651 [2024-12-05 14:03:30.228171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.651 qpair failed and we were unable to recover it. 
00:31:47.912 [2024-12-05 14:03:30.238117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.912 [2024-12-05 14:03:30.238177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.912 [2024-12-05 14:03:30.238191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.912 [2024-12-05 14:03:30.238198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.912 [2024-12-05 14:03:30.238204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.912 [2024-12-05 14:03:30.238218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.912 qpair failed and we were unable to recover it. 
00:31:47.912 [2024-12-05 14:03:30.248187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.912 [2024-12-05 14:03:30.248259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.912 [2024-12-05 14:03:30.248276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.912 [2024-12-05 14:03:30.248282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.912 [2024-12-05 14:03:30.248288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.912 [2024-12-05 14:03:30.248302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.912 qpair failed and we were unable to recover it. 
00:31:47.912 [2024-12-05 14:03:30.258179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.912 [2024-12-05 14:03:30.258239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.912 [2024-12-05 14:03:30.258253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.912 [2024-12-05 14:03:30.258259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.912 [2024-12-05 14:03:30.258265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.912 [2024-12-05 14:03:30.258279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.912 qpair failed and we were unable to recover it. 
00:31:47.912 [2024-12-05 14:03:30.268196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.912 [2024-12-05 14:03:30.268257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.912 [2024-12-05 14:03:30.268271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.912 [2024-12-05 14:03:30.268278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.912 [2024-12-05 14:03:30.268283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.912 [2024-12-05 14:03:30.268298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.912 qpair failed and we were unable to recover it. 
00:31:47.912 [2024-12-05 14:03:30.278232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.912 [2024-12-05 14:03:30.278287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.912 [2024-12-05 14:03:30.278301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.912 [2024-12-05 14:03:30.278307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.912 [2024-12-05 14:03:30.278313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.912 [2024-12-05 14:03:30.278327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.912 qpair failed and we were unable to recover it. 
00:31:47.912 [2024-12-05 14:03:30.288184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.912 [2024-12-05 14:03:30.288236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.912 [2024-12-05 14:03:30.288250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.912 [2024-12-05 14:03:30.288256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.912 [2024-12-05 14:03:30.288265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.912 [2024-12-05 14:03:30.288279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.912 qpair failed and we were unable to recover it. 
00:31:47.912 [2024-12-05 14:03:30.298326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.912 [2024-12-05 14:03:30.298406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.912 [2024-12-05 14:03:30.298420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.912 [2024-12-05 14:03:30.298427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.912 [2024-12-05 14:03:30.298433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.912 [2024-12-05 14:03:30.298447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.912 qpair failed and we were unable to recover it. 
00:31:47.912 [2024-12-05 14:03:30.308299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.912 [2024-12-05 14:03:30.308350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.912 [2024-12-05 14:03:30.308365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.912 [2024-12-05 14:03:30.308384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.912 [2024-12-05 14:03:30.308391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.912 [2024-12-05 14:03:30.308407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.912 qpair failed and we were unable to recover it. 
00:31:47.912 [2024-12-05 14:03:30.318419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.912 [2024-12-05 14:03:30.318478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.912 [2024-12-05 14:03:30.318492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.912 [2024-12-05 14:03:30.318499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.912 [2024-12-05 14:03:30.318505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.912 [2024-12-05 14:03:30.318520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.912 qpair failed and we were unable to recover it. 
00:31:47.912 [2024-12-05 14:03:30.328295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.912 [2024-12-05 14:03:30.328350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.912 [2024-12-05 14:03:30.328364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.912 [2024-12-05 14:03:30.328377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.913 [2024-12-05 14:03:30.328383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.913 [2024-12-05 14:03:30.328397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.913 qpair failed and we were unable to recover it. 
00:31:47.913 [2024-12-05 14:03:30.338425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.913 [2024-12-05 14:03:30.338492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.913 [2024-12-05 14:03:30.338507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.913 [2024-12-05 14:03:30.338514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.913 [2024-12-05 14:03:30.338521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.913 [2024-12-05 14:03:30.338535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.913 qpair failed and we were unable to recover it. 
00:31:47.913 [2024-12-05 14:03:30.348433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.913 [2024-12-05 14:03:30.348486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.913 [2024-12-05 14:03:30.348500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.913 [2024-12-05 14:03:30.348506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.913 [2024-12-05 14:03:30.348512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.913 [2024-12-05 14:03:30.348526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.913 qpair failed and we were unable to recover it. 
00:31:47.913 [2024-12-05 14:03:30.358455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.913 [2024-12-05 14:03:30.358509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.913 [2024-12-05 14:03:30.358522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.913 [2024-12-05 14:03:30.358529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.913 [2024-12-05 14:03:30.358535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.913 [2024-12-05 14:03:30.358548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.913 qpair failed and we were unable to recover it. 
00:31:47.913 [2024-12-05 14:03:30.368557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.913 [2024-12-05 14:03:30.368625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.913 [2024-12-05 14:03:30.368639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.913 [2024-12-05 14:03:30.368646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.913 [2024-12-05 14:03:30.368652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.913 [2024-12-05 14:03:30.368666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.913 qpair failed and we were unable to recover it. 
00:31:47.913 [2024-12-05 14:03:30.378516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.913 [2024-12-05 14:03:30.378576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.913 [2024-12-05 14:03:30.378592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.913 [2024-12-05 14:03:30.378599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.913 [2024-12-05 14:03:30.378605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.913 [2024-12-05 14:03:30.378618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.913 qpair failed and we were unable to recover it. 
00:31:47.913 [2024-12-05 14:03:30.388535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.913 [2024-12-05 14:03:30.388589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.913 [2024-12-05 14:03:30.388602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.913 [2024-12-05 14:03:30.388609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.913 [2024-12-05 14:03:30.388615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.913 [2024-12-05 14:03:30.388629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.913 qpair failed and we were unable to recover it. 
00:31:47.913 [2024-12-05 14:03:30.398567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.913 [2024-12-05 14:03:30.398637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.913 [2024-12-05 14:03:30.398651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.913 [2024-12-05 14:03:30.398658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.913 [2024-12-05 14:03:30.398663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.913 [2024-12-05 14:03:30.398679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.913 qpair failed and we were unable to recover it. 
00:31:47.913 [2024-12-05 14:03:30.408627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.913 [2024-12-05 14:03:30.408689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.913 [2024-12-05 14:03:30.408704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.913 [2024-12-05 14:03:30.408711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.913 [2024-12-05 14:03:30.408716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.913 [2024-12-05 14:03:30.408731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.913 qpair failed and we were unable to recover it. 
00:31:47.913 [2024-12-05 14:03:30.418619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.913 [2024-12-05 14:03:30.418675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.913 [2024-12-05 14:03:30.418688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.913 [2024-12-05 14:03:30.418695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.913 [2024-12-05 14:03:30.418703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.913 [2024-12-05 14:03:30.418718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.913 qpair failed and we were unable to recover it. 
00:31:47.913 [2024-12-05 14:03:30.428653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.913 [2024-12-05 14:03:30.428707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.913 [2024-12-05 14:03:30.428722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.913 [2024-12-05 14:03:30.428728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.913 [2024-12-05 14:03:30.428734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.913 [2024-12-05 14:03:30.428749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.913 qpair failed and we were unable to recover it. 
00:31:47.913 [2024-12-05 14:03:30.438685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.913 [2024-12-05 14:03:30.438759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.913 [2024-12-05 14:03:30.438772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.913 [2024-12-05 14:03:30.438779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.913 [2024-12-05 14:03:30.438785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.913 [2024-12-05 14:03:30.438799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.913 qpair failed and we were unable to recover it. 
00:31:47.913 [2024-12-05 14:03:30.448705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.913 [2024-12-05 14:03:30.448761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.913 [2024-12-05 14:03:30.448777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.913 [2024-12-05 14:03:30.448784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.913 [2024-12-05 14:03:30.448790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.913 [2024-12-05 14:03:30.448805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.913 qpair failed and we were unable to recover it. 
00:31:47.913 [2024-12-05 14:03:30.458727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.913 [2024-12-05 14:03:30.458783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.913 [2024-12-05 14:03:30.458797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.913 [2024-12-05 14:03:30.458803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.913 [2024-12-05 14:03:30.458809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.914 [2024-12-05 14:03:30.458824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.914 qpair failed and we were unable to recover it. 
00:31:47.914 [2024-12-05 14:03:30.468787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.914 [2024-12-05 14:03:30.468844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.914 [2024-12-05 14:03:30.468858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.914 [2024-12-05 14:03:30.468864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.914 [2024-12-05 14:03:30.468870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.914 [2024-12-05 14:03:30.468884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.914 qpair failed and we were unable to recover it. 
00:31:47.914 [2024-12-05 14:03:30.478760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.914 [2024-12-05 14:03:30.478858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.914 [2024-12-05 14:03:30.478872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.914 [2024-12-05 14:03:30.478878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.914 [2024-12-05 14:03:30.478884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.914 [2024-12-05 14:03:30.478898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.914 qpair failed and we were unable to recover it. 
00:31:47.914 [2024-12-05 14:03:30.488796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:47.914 [2024-12-05 14:03:30.488850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:47.914 [2024-12-05 14:03:30.488863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:47.914 [2024-12-05 14:03:30.488870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:47.914 [2024-12-05 14:03:30.488876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:47.914 [2024-12-05 14:03:30.488890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:47.914 qpair failed and we were unable to recover it. 
00:31:48.178 [2024-12-05 14:03:30.498833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.179 [2024-12-05 14:03:30.498884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.179 [2024-12-05 14:03:30.498897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.179 [2024-12-05 14:03:30.498903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.179 [2024-12-05 14:03:30.498909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.179 [2024-12-05 14:03:30.498923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.179 qpair failed and we were unable to recover it. 
00:31:48.179 [2024-12-05 14:03:30.508863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.179 [2024-12-05 14:03:30.508917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.179 [2024-12-05 14:03:30.508934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.179 [2024-12-05 14:03:30.508940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.179 [2024-12-05 14:03:30.508946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.179 [2024-12-05 14:03:30.508961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.179 qpair failed and we were unable to recover it. 
00:31:48.179 [2024-12-05 14:03:30.518903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.179 [2024-12-05 14:03:30.518958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.179 [2024-12-05 14:03:30.518972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.179 [2024-12-05 14:03:30.518978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.179 [2024-12-05 14:03:30.518984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.179 [2024-12-05 14:03:30.518998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.179 qpair failed and we were unable to recover it. 
00:31:48.179 [2024-12-05 14:03:30.528915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.179 [2024-12-05 14:03:30.529013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.179 [2024-12-05 14:03:30.529027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.179 [2024-12-05 14:03:30.529033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.179 [2024-12-05 14:03:30.529039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.179 [2024-12-05 14:03:30.529053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.179 qpair failed and we were unable to recover it. 
00:31:48.179 [2024-12-05 14:03:30.538896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.179 [2024-12-05 14:03:30.538972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.179 [2024-12-05 14:03:30.538986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.179 [2024-12-05 14:03:30.538992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.179 [2024-12-05 14:03:30.538998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.179 [2024-12-05 14:03:30.539012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.179 qpair failed and we were unable to recover it. 
00:31:48.179 [2024-12-05 14:03:30.548910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.179 [2024-12-05 14:03:30.548962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.179 [2024-12-05 14:03:30.548976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.179 [2024-12-05 14:03:30.548982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.179 [2024-12-05 14:03:30.548992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.179 [2024-12-05 14:03:30.549006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.179 qpair failed and we were unable to recover it. 
00:31:48.179 [2024-12-05 14:03:30.558998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.179 [2024-12-05 14:03:30.559054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.179 [2024-12-05 14:03:30.559068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.179 [2024-12-05 14:03:30.559075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.179 [2024-12-05 14:03:30.559081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.179 [2024-12-05 14:03:30.559095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.179 qpair failed and we were unable to recover it. 
00:31:48.179 [2024-12-05 14:03:30.569053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.179 [2024-12-05 14:03:30.569142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.179 [2024-12-05 14:03:30.569156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.179 [2024-12-05 14:03:30.569163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.179 [2024-12-05 14:03:30.569168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.179 [2024-12-05 14:03:30.569183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.179 qpair failed and we were unable to recover it. 
00:31:48.179 [2024-12-05 14:03:30.579047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.179 [2024-12-05 14:03:30.579106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.179 [2024-12-05 14:03:30.579120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.179 [2024-12-05 14:03:30.579129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.179 [2024-12-05 14:03:30.579136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.179 [2024-12-05 14:03:30.579150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.179 qpair failed and we were unable to recover it. 
00:31:48.179 [2024-12-05 14:03:30.589083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.179 [2024-12-05 14:03:30.589148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.179 [2024-12-05 14:03:30.589162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.179 [2024-12-05 14:03:30.589168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.179 [2024-12-05 14:03:30.589174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.179 [2024-12-05 14:03:30.589188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.179 qpair failed and we were unable to recover it. 
00:31:48.179 [2024-12-05 14:03:30.599137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.179 [2024-12-05 14:03:30.599208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.179 [2024-12-05 14:03:30.599223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.179 [2024-12-05 14:03:30.599231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.179 [2024-12-05 14:03:30.599237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.179 [2024-12-05 14:03:30.599252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.179 qpair failed and we were unable to recover it. 
00:31:48.179 [2024-12-05 14:03:30.609067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.179 [2024-12-05 14:03:30.609127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.179 [2024-12-05 14:03:30.609142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.179 [2024-12-05 14:03:30.609148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.179 [2024-12-05 14:03:30.609154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.179 [2024-12-05 14:03:30.609169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.179 qpair failed and we were unable to recover it. 
00:31:48.179 [2024-12-05 14:03:30.619168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.180 [2024-12-05 14:03:30.619223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.180 [2024-12-05 14:03:30.619238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.180 [2024-12-05 14:03:30.619245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.180 [2024-12-05 14:03:30.619251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.180 [2024-12-05 14:03:30.619265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.180 qpair failed and we were unable to recover it. 
00:31:48.180 [2024-12-05 14:03:30.629243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.180 [2024-12-05 14:03:30.629297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.180 [2024-12-05 14:03:30.629311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.180 [2024-12-05 14:03:30.629318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.180 [2024-12-05 14:03:30.629324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.180 [2024-12-05 14:03:30.629338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.180 qpair failed and we were unable to recover it. 
00:31:48.180 [2024-12-05 14:03:30.639232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.180 [2024-12-05 14:03:30.639285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.180 [2024-12-05 14:03:30.639303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.180 [2024-12-05 14:03:30.639311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.180 [2024-12-05 14:03:30.639317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.180 [2024-12-05 14:03:30.639331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.180 qpair failed and we were unable to recover it. 
00:31:48.180 [2024-12-05 14:03:30.649259] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.180 [2024-12-05 14:03:30.649314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.180 [2024-12-05 14:03:30.649328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.180 [2024-12-05 14:03:30.649334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.180 [2024-12-05 14:03:30.649340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.180 [2024-12-05 14:03:30.649355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.180 qpair failed and we were unable to recover it. 
00:31:48.180 [2024-12-05 14:03:30.659296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.180 [2024-12-05 14:03:30.659370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.180 [2024-12-05 14:03:30.659384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.180 [2024-12-05 14:03:30.659390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.180 [2024-12-05 14:03:30.659396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.180 [2024-12-05 14:03:30.659410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.180 qpair failed and we were unable to recover it. 
00:31:48.180 [2024-12-05 14:03:30.669299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.180 [2024-12-05 14:03:30.669387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.180 [2024-12-05 14:03:30.669402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.180 [2024-12-05 14:03:30.669408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.180 [2024-12-05 14:03:30.669414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.180 [2024-12-05 14:03:30.669428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.180 qpair failed and we were unable to recover it. 
00:31:48.180 [2024-12-05 14:03:30.679278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.180 [2024-12-05 14:03:30.679334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.180 [2024-12-05 14:03:30.679347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.180 [2024-12-05 14:03:30.679354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.180 [2024-12-05 14:03:30.679362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.180 [2024-12-05 14:03:30.679383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.180 qpair failed and we were unable to recover it. 
00:31:48.180 [2024-12-05 14:03:30.689350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.180 [2024-12-05 14:03:30.689409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.180 [2024-12-05 14:03:30.689423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.180 [2024-12-05 14:03:30.689430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.180 [2024-12-05 14:03:30.689435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.180 [2024-12-05 14:03:30.689450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.180 qpair failed and we were unable to recover it. 
00:31:48.180 [2024-12-05 14:03:30.699390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.180 [2024-12-05 14:03:30.699446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.180 [2024-12-05 14:03:30.699460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.180 [2024-12-05 14:03:30.699466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.180 [2024-12-05 14:03:30.699472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.180 [2024-12-05 14:03:30.699486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.180 qpair failed and we were unable to recover it. 
00:31:48.180 [2024-12-05 14:03:30.709411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.180 [2024-12-05 14:03:30.709505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.180 [2024-12-05 14:03:30.709519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.180 [2024-12-05 14:03:30.709526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.181 [2024-12-05 14:03:30.709531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.181 [2024-12-05 14:03:30.709546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.181 qpair failed and we were unable to recover it. 
00:31:48.181 [2024-12-05 14:03:30.719436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.181 [2024-12-05 14:03:30.719495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.181 [2024-12-05 14:03:30.719509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.181 [2024-12-05 14:03:30.719515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.181 [2024-12-05 14:03:30.719521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.181 [2024-12-05 14:03:30.719534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.181 qpair failed and we were unable to recover it. 
00:31:48.181 [2024-12-05 14:03:30.729471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.181 [2024-12-05 14:03:30.729547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.181 [2024-12-05 14:03:30.729561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.181 [2024-12-05 14:03:30.729567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.181 [2024-12-05 14:03:30.729573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.181 [2024-12-05 14:03:30.729587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.181 qpair failed and we were unable to recover it. 
00:31:48.181 [2024-12-05 14:03:30.739470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.181 [2024-12-05 14:03:30.739564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.181 [2024-12-05 14:03:30.739578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.181 [2024-12-05 14:03:30.739584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.181 [2024-12-05 14:03:30.739589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.181 [2024-12-05 14:03:30.739604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.181 qpair failed and we were unable to recover it. 
00:31:48.181 [2024-12-05 14:03:30.749563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.181 [2024-12-05 14:03:30.749637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.181 [2024-12-05 14:03:30.749650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.181 [2024-12-05 14:03:30.749657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.181 [2024-12-05 14:03:30.749663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.181 [2024-12-05 14:03:30.749677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.181 qpair failed and we were unable to recover it. 
00:31:48.181 [2024-12-05 14:03:30.759510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.181 [2024-12-05 14:03:30.759584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.181 [2024-12-05 14:03:30.759598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.181 [2024-12-05 14:03:30.759604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.181 [2024-12-05 14:03:30.759610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.181 [2024-12-05 14:03:30.759624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.181 qpair failed and we were unable to recover it. 
00:31:48.537 [2024-12-05 14:03:30.769601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.537 [2024-12-05 14:03:30.769660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.537 [2024-12-05 14:03:30.769678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.537 [2024-12-05 14:03:30.769686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.537 [2024-12-05 14:03:30.769692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.537 [2024-12-05 14:03:30.769708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.537 qpair failed and we were unable to recover it. 
00:31:48.537 [2024-12-05 14:03:30.779680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.537 [2024-12-05 14:03:30.779775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.537 [2024-12-05 14:03:30.779789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.537 [2024-12-05 14:03:30.779796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.537 [2024-12-05 14:03:30.779801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.537 [2024-12-05 14:03:30.779816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.537 qpair failed and we were unable to recover it. 
00:31:48.537 [2024-12-05 14:03:30.789577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.537 [2024-12-05 14:03:30.789669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.537 [2024-12-05 14:03:30.789683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.537 [2024-12-05 14:03:30.789690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.537 [2024-12-05 14:03:30.789695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.537 [2024-12-05 14:03:30.789709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.537 qpair failed and we were unable to recover it. 
00:31:48.537 [2024-12-05 14:03:30.799620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.537 [2024-12-05 14:03:30.799678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.537 [2024-12-05 14:03:30.799694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.537 [2024-12-05 14:03:30.799701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.537 [2024-12-05 14:03:30.799707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.537 [2024-12-05 14:03:30.799722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.537 qpair failed and we were unable to recover it. 
00:31:48.537 [2024-12-05 14:03:30.809702] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.537 [2024-12-05 14:03:30.809786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.537 [2024-12-05 14:03:30.809800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.537 [2024-12-05 14:03:30.809806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.537 [2024-12-05 14:03:30.809817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.537 [2024-12-05 14:03:30.809831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.537 qpair failed and we were unable to recover it. 
00:31:48.537 [2024-12-05 14:03:30.819659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.537 [2024-12-05 14:03:30.819718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.537 [2024-12-05 14:03:30.819733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.537 [2024-12-05 14:03:30.819739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.537 [2024-12-05 14:03:30.819745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.537 [2024-12-05 14:03:30.819759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.537 qpair failed and we were unable to recover it. 
00:31:48.537 [2024-12-05 14:03:30.829689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.537 [2024-12-05 14:03:30.829739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.537 [2024-12-05 14:03:30.829753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.537 [2024-12-05 14:03:30.829759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.537 [2024-12-05 14:03:30.829765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.537 [2024-12-05 14:03:30.829779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.537 qpair failed and we were unable to recover it. 
00:31:48.537 [2024-12-05 14:03:30.839769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.537 [2024-12-05 14:03:30.839830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.537 [2024-12-05 14:03:30.839843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.537 [2024-12-05 14:03:30.839850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.537 [2024-12-05 14:03:30.839856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.537 [2024-12-05 14:03:30.839870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.537 qpair failed and we were unable to recover it. 
00:31:48.537 [2024-12-05 14:03:30.849799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.537 [2024-12-05 14:03:30.849888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.537 [2024-12-05 14:03:30.849902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.537 [2024-12-05 14:03:30.849908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.537 [2024-12-05 14:03:30.849914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.537 [2024-12-05 14:03:30.849928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.537 qpair failed and we were unable to recover it. 
00:31:48.537 [2024-12-05 14:03:30.859764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.537 [2024-12-05 14:03:30.859853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.537 [2024-12-05 14:03:30.859867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.537 [2024-12-05 14:03:30.859873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.537 [2024-12-05 14:03:30.859879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.537 [2024-12-05 14:03:30.859893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.537 qpair failed and we were unable to recover it. 
00:31:48.537 [2024-12-05 14:03:30.869935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.537 [2024-12-05 14:03:30.870020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.537 [2024-12-05 14:03:30.870033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.537 [2024-12-05 14:03:30.870039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.537 [2024-12-05 14:03:30.870045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.537 [2024-12-05 14:03:30.870059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.537 qpair failed and we were unable to recover it. 
00:31:48.537 [2024-12-05 14:03:30.879865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.537 [2024-12-05 14:03:30.879944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.537 [2024-12-05 14:03:30.879958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.537 [2024-12-05 14:03:30.879964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.537 [2024-12-05 14:03:30.879970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.537 [2024-12-05 14:03:30.879984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.537 qpair failed and we were unable to recover it. 
00:31:48.537 [2024-12-05 14:03:30.889943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.538 [2024-12-05 14:03:30.889995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.538 [2024-12-05 14:03:30.890008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.538 [2024-12-05 14:03:30.890015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.538 [2024-12-05 14:03:30.890021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.538 [2024-12-05 14:03:30.890034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.538 qpair failed and we were unable to recover it. 
00:31:48.538 [2024-12-05 14:03:30.899939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.538 [2024-12-05 14:03:30.899990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.538 [2024-12-05 14:03:30.900007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.538 [2024-12-05 14:03:30.900013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.538 [2024-12-05 14:03:30.900019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.538 [2024-12-05 14:03:30.900033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.538 qpair failed and we were unable to recover it. 
00:31:48.538 [2024-12-05 14:03:30.909966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.538 [2024-12-05 14:03:30.910018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.538 [2024-12-05 14:03:30.910032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.538 [2024-12-05 14:03:30.910039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.538 [2024-12-05 14:03:30.910045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.538 [2024-12-05 14:03:30.910059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.538 qpair failed and we were unable to recover it. 
00:31:48.538 [2024-12-05 14:03:30.919959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.538 [2024-12-05 14:03:30.920016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.538 [2024-12-05 14:03:30.920030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.538 [2024-12-05 14:03:30.920037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.538 [2024-12-05 14:03:30.920042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.538 [2024-12-05 14:03:30.920057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.538 qpair failed and we were unable to recover it. 
00:31:48.538 [2024-12-05 14:03:30.930078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.538 [2024-12-05 14:03:30.930147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.538 [2024-12-05 14:03:30.930161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.538 [2024-12-05 14:03:30.930167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.538 [2024-12-05 14:03:30.930173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.538 [2024-12-05 14:03:30.930188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.538 qpair failed and we were unable to recover it. 
00:31:48.538 [2024-12-05 14:03:30.939990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.538 [2024-12-05 14:03:30.940048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.538 [2024-12-05 14:03:30.940062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.538 [2024-12-05 14:03:30.940068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.538 [2024-12-05 14:03:30.940078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.538 [2024-12-05 14:03:30.940092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.538 qpair failed and we were unable to recover it. 
00:31:48.538 [2024-12-05 14:03:30.950120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.538 [2024-12-05 14:03:30.950181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.538 [2024-12-05 14:03:30.950195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.538 [2024-12-05 14:03:30.950201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.538 [2024-12-05 14:03:30.950207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.538 [2024-12-05 14:03:30.950221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.538 qpair failed and we were unable to recover it. 
00:31:48.538 [2024-12-05 14:03:30.960160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.538 [2024-12-05 14:03:30.960239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.538 [2024-12-05 14:03:30.960253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.538 [2024-12-05 14:03:30.960259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.538 [2024-12-05 14:03:30.960265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.538 [2024-12-05 14:03:30.960279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.538 qpair failed and we were unable to recover it. 
00:31:48.538 [2024-12-05 14:03:30.970154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.538 [2024-12-05 14:03:30.970205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.538 [2024-12-05 14:03:30.970219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.538 [2024-12-05 14:03:30.970225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.538 [2024-12-05 14:03:30.970231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.538 [2024-12-05 14:03:30.970245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.538 qpair failed and we were unable to recover it. 
00:31:48.538 [2024-12-05 14:03:30.980166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.538 [2024-12-05 14:03:30.980251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.538 [2024-12-05 14:03:30.980265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.538 [2024-12-05 14:03:30.980271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.538 [2024-12-05 14:03:30.980277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.538 [2024-12-05 14:03:30.980291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.538 qpair failed and we were unable to recover it. 
00:31:48.538 [2024-12-05 14:03:30.990269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.538 [2024-12-05 14:03:30.990324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.538 [2024-12-05 14:03:30.990337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.538 [2024-12-05 14:03:30.990344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.538 [2024-12-05 14:03:30.990350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.538 [2024-12-05 14:03:30.990364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.538 qpair failed and we were unable to recover it.
00:31:48.538 [2024-12-05 14:03:31.000255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.538 [2024-12-05 14:03:31.000309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.538 [2024-12-05 14:03:31.000323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.538 [2024-12-05 14:03:31.000330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.538 [2024-12-05 14:03:31.000336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.538 [2024-12-05 14:03:31.000350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.538 qpair failed and we were unable to recover it.
00:31:48.538 [2024-12-05 14:03:31.010262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.538 [2024-12-05 14:03:31.010336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.538 [2024-12-05 14:03:31.010350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.538 [2024-12-05 14:03:31.010357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.538 [2024-12-05 14:03:31.010363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.538 [2024-12-05 14:03:31.010381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.538 qpair failed and we were unable to recover it.
00:31:48.538 [2024-12-05 14:03:31.020316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.538 [2024-12-05 14:03:31.020389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.538 [2024-12-05 14:03:31.020403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.538 [2024-12-05 14:03:31.020409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.538 [2024-12-05 14:03:31.020415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.538 [2024-12-05 14:03:31.020429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.538 qpair failed and we were unable to recover it.
00:31:48.538 [2024-12-05 14:03:31.030259] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.538 [2024-12-05 14:03:31.030315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.538 [2024-12-05 14:03:31.030332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.538 [2024-12-05 14:03:31.030339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.538 [2024-12-05 14:03:31.030345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.538 [2024-12-05 14:03:31.030359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.538 qpair failed and we were unable to recover it.
00:31:48.538 [2024-12-05 14:03:31.040363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.538 [2024-12-05 14:03:31.040425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.538 [2024-12-05 14:03:31.040438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.538 [2024-12-05 14:03:31.040445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.538 [2024-12-05 14:03:31.040451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.538 [2024-12-05 14:03:31.040464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.538 qpair failed and we were unable to recover it.
00:31:48.538 [2024-12-05 14:03:31.050393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.538 [2024-12-05 14:03:31.050469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.538 [2024-12-05 14:03:31.050483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.538 [2024-12-05 14:03:31.050490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.538 [2024-12-05 14:03:31.050496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.538 [2024-12-05 14:03:31.050510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.538 qpair failed and we were unable to recover it.
00:31:48.538 [2024-12-05 14:03:31.060376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.538 [2024-12-05 14:03:31.060470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.538 [2024-12-05 14:03:31.060484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.538 [2024-12-05 14:03:31.060490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.538 [2024-12-05 14:03:31.060496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.538 [2024-12-05 14:03:31.060511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.538 qpair failed and we were unable to recover it.
00:31:48.538 [2024-12-05 14:03:31.070531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.538 [2024-12-05 14:03:31.070614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.538 [2024-12-05 14:03:31.070628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.538 [2024-12-05 14:03:31.070634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.538 [2024-12-05 14:03:31.070644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.538 [2024-12-05 14:03:31.070659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.538 qpair failed and we were unable to recover it.
00:31:48.538 [2024-12-05 14:03:31.080486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.538 [2024-12-05 14:03:31.080541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.538 [2024-12-05 14:03:31.080555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.538 [2024-12-05 14:03:31.080561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.538 [2024-12-05 14:03:31.080567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.538 [2024-12-05 14:03:31.080582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.538 qpair failed and we were unable to recover it.
00:31:48.538 [2024-12-05 14:03:31.090572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.538 [2024-12-05 14:03:31.090637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.538 [2024-12-05 14:03:31.090651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.538 [2024-12-05 14:03:31.090657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.538 [2024-12-05 14:03:31.090663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.538 [2024-12-05 14:03:31.090677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.538 qpair failed and we were unable to recover it.
00:31:48.538 [2024-12-05 14:03:31.100563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.538 [2024-12-05 14:03:31.100621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.538 [2024-12-05 14:03:31.100634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.538 [2024-12-05 14:03:31.100641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.538 [2024-12-05 14:03:31.100647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.538 [2024-12-05 14:03:31.100661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.538 qpair failed and we were unable to recover it.
00:31:48.538 [2024-12-05 14:03:31.110567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.538 [2024-12-05 14:03:31.110623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.538 [2024-12-05 14:03:31.110636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.538 [2024-12-05 14:03:31.110643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.538 [2024-12-05 14:03:31.110649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.538 [2024-12-05 14:03:31.110663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.538 qpair failed and we were unable to recover it.
00:31:48.814 [2024-12-05 14:03:31.120620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.814 [2024-12-05 14:03:31.120690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.814 [2024-12-05 14:03:31.120704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.814 [2024-12-05 14:03:31.120711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.814 [2024-12-05 14:03:31.120716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.814 [2024-12-05 14:03:31.120730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.814 qpair failed and we were unable to recover it.
00:31:48.814 [2024-12-05 14:03:31.130627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.814 [2024-12-05 14:03:31.130683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.814 [2024-12-05 14:03:31.130697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.814 [2024-12-05 14:03:31.130704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.814 [2024-12-05 14:03:31.130710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.814 [2024-12-05 14:03:31.130724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.814 qpair failed and we were unable to recover it.
00:31:48.814 [2024-12-05 14:03:31.140653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.814 [2024-12-05 14:03:31.140708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.814 [2024-12-05 14:03:31.140722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.814 [2024-12-05 14:03:31.140729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.814 [2024-12-05 14:03:31.140734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.814 [2024-12-05 14:03:31.140748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.814 qpair failed and we were unable to recover it.
00:31:48.814 [2024-12-05 14:03:31.150675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.814 [2024-12-05 14:03:31.150728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.814 [2024-12-05 14:03:31.150742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.814 [2024-12-05 14:03:31.150748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.814 [2024-12-05 14:03:31.150754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.814 [2024-12-05 14:03:31.150767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.814 qpair failed and we were unable to recover it.
00:31:48.814 [2024-12-05 14:03:31.160700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.814 [2024-12-05 14:03:31.160756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.814 [2024-12-05 14:03:31.160772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.814 [2024-12-05 14:03:31.160778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.814 [2024-12-05 14:03:31.160784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.814 [2024-12-05 14:03:31.160798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.814 qpair failed and we were unable to recover it.
00:31:48.814 [2024-12-05 14:03:31.170790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.814 [2024-12-05 14:03:31.170851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.814 [2024-12-05 14:03:31.170864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.814 [2024-12-05 14:03:31.170871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.814 [2024-12-05 14:03:31.170877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.814 [2024-12-05 14:03:31.170890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.814 qpair failed and we were unable to recover it.
00:31:48.814 [2024-12-05 14:03:31.180772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.814 [2024-12-05 14:03:31.180823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.814 [2024-12-05 14:03:31.180836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.814 [2024-12-05 14:03:31.180843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.814 [2024-12-05 14:03:31.180849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.814 [2024-12-05 14:03:31.180862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.814 qpair failed and we were unable to recover it.
00:31:48.814 [2024-12-05 14:03:31.190781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.814 [2024-12-05 14:03:31.190837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.814 [2024-12-05 14:03:31.190850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.814 [2024-12-05 14:03:31.190856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.814 [2024-12-05 14:03:31.190862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.814 [2024-12-05 14:03:31.190876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.814 qpair failed and we were unable to recover it.
00:31:48.814 [2024-12-05 14:03:31.200869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.814 [2024-12-05 14:03:31.200930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.814 [2024-12-05 14:03:31.200944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.814 [2024-12-05 14:03:31.200951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.814 [2024-12-05 14:03:31.200959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.814 [2024-12-05 14:03:31.200974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.814 qpair failed and we were unable to recover it.
00:31:48.814 [2024-12-05 14:03:31.210847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.814 [2024-12-05 14:03:31.210906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.815 [2024-12-05 14:03:31.210920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.815 [2024-12-05 14:03:31.210927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.815 [2024-12-05 14:03:31.210933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.815 [2024-12-05 14:03:31.210947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.815 qpair failed and we were unable to recover it.
00:31:48.815 [2024-12-05 14:03:31.220873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.815 [2024-12-05 14:03:31.220926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.815 [2024-12-05 14:03:31.220939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.815 [2024-12-05 14:03:31.220946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.815 [2024-12-05 14:03:31.220952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.815 [2024-12-05 14:03:31.220966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.815 qpair failed and we were unable to recover it.
00:31:48.815 [2024-12-05 14:03:31.230894] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.815 [2024-12-05 14:03:31.230942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.815 [2024-12-05 14:03:31.230955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.815 [2024-12-05 14:03:31.230962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.815 [2024-12-05 14:03:31.230968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.815 [2024-12-05 14:03:31.230982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.815 qpair failed and we were unable to recover it.
00:31:48.815 [2024-12-05 14:03:31.240937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.815 [2024-12-05 14:03:31.240998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.815 [2024-12-05 14:03:31.241011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.815 [2024-12-05 14:03:31.241018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.815 [2024-12-05 14:03:31.241023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.815 [2024-12-05 14:03:31.241038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.815 qpair failed and we were unable to recover it.
00:31:48.815 [2024-12-05 14:03:31.250965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.815 [2024-12-05 14:03:31.251019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.815 [2024-12-05 14:03:31.251032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.815 [2024-12-05 14:03:31.251038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.815 [2024-12-05 14:03:31.251044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.815 [2024-12-05 14:03:31.251058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.815 qpair failed and we were unable to recover it.
00:31:48.815 [2024-12-05 14:03:31.260985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.815 [2024-12-05 14:03:31.261036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.815 [2024-12-05 14:03:31.261050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.815 [2024-12-05 14:03:31.261057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.815 [2024-12-05 14:03:31.261063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.815 [2024-12-05 14:03:31.261077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.815 qpair failed and we were unable to recover it.
00:31:48.815 [2024-12-05 14:03:31.270982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.815 [2024-12-05 14:03:31.271036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.815 [2024-12-05 14:03:31.271049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.815 [2024-12-05 14:03:31.271057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.815 [2024-12-05 14:03:31.271063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.815 [2024-12-05 14:03:31.271077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.815 qpair failed and we were unable to recover it.
00:31:48.815 [2024-12-05 14:03:31.281085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.815 [2024-12-05 14:03:31.281138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.815 [2024-12-05 14:03:31.281151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.815 [2024-12-05 14:03:31.281158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.815 [2024-12-05 14:03:31.281164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.815 [2024-12-05 14:03:31.281178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.815 qpair failed and we were unable to recover it.
00:31:48.815 [2024-12-05 14:03:31.291093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.815 [2024-12-05 14:03:31.291157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.815 [2024-12-05 14:03:31.291174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.815 [2024-12-05 14:03:31.291181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.815 [2024-12-05 14:03:31.291186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.815 [2024-12-05 14:03:31.291201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.815 qpair failed and we were unable to recover it.
00:31:48.815 [2024-12-05 14:03:31.301141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.815 [2024-12-05 14:03:31.301195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.815 [2024-12-05 14:03:31.301210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.815 [2024-12-05 14:03:31.301216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.815 [2024-12-05 14:03:31.301222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.815 [2024-12-05 14:03:31.301236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.815 qpair failed and we were unable to recover it.
00:31:48.815 [2024-12-05 14:03:31.311174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.815 [2024-12-05 14:03:31.311258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.815 [2024-12-05 14:03:31.311272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.815 [2024-12-05 14:03:31.311279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.815 [2024-12-05 14:03:31.311285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.815 [2024-12-05 14:03:31.311298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.815 qpair failed and we were unable to recover it.
00:31:48.815 [2024-12-05 14:03:31.321167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.815 [2024-12-05 14:03:31.321227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.815 [2024-12-05 14:03:31.321242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.815 [2024-12-05 14:03:31.321249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.815 [2024-12-05 14:03:31.321255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.815 [2024-12-05 14:03:31.321270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.815 qpair failed and we were unable to recover it.
00:31:48.815 [2024-12-05 14:03:31.331176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:48.815 [2024-12-05 14:03:31.331236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:48.815 [2024-12-05 14:03:31.331250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:48.815 [2024-12-05 14:03:31.331257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:48.815 [2024-12-05 14:03:31.331266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:48.815 [2024-12-05 14:03:31.331280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:48.815 qpair failed and we were unable to recover it.
00:31:48.815 [2024-12-05 14:03:31.341215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.815 [2024-12-05 14:03:31.341270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.815 [2024-12-05 14:03:31.341284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.816 [2024-12-05 14:03:31.341291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.816 [2024-12-05 14:03:31.341297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.816 [2024-12-05 14:03:31.341311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.816 qpair failed and we were unable to recover it. 
00:31:48.816 [2024-12-05 14:03:31.351285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.816 [2024-12-05 14:03:31.351335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.816 [2024-12-05 14:03:31.351349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.816 [2024-12-05 14:03:31.351355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.816 [2024-12-05 14:03:31.351363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.816 [2024-12-05 14:03:31.351382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.816 qpair failed and we were unable to recover it. 
00:31:48.816 [2024-12-05 14:03:31.361211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.816 [2024-12-05 14:03:31.361266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.816 [2024-12-05 14:03:31.361280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.816 [2024-12-05 14:03:31.361287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.816 [2024-12-05 14:03:31.361293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.816 [2024-12-05 14:03:31.361306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.816 qpair failed and we were unable to recover it. 
00:31:48.816 [2024-12-05 14:03:31.371298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.816 [2024-12-05 14:03:31.371353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.816 [2024-12-05 14:03:31.371370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.816 [2024-12-05 14:03:31.371378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.816 [2024-12-05 14:03:31.371384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.816 [2024-12-05 14:03:31.371398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.816 qpair failed and we were unable to recover it. 
00:31:48.816 [2024-12-05 14:03:31.381348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.816 [2024-12-05 14:03:31.381414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.816 [2024-12-05 14:03:31.381428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.816 [2024-12-05 14:03:31.381434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.816 [2024-12-05 14:03:31.381440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.816 [2024-12-05 14:03:31.381453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.816 qpair failed and we were unable to recover it. 
00:31:48.816 [2024-12-05 14:03:31.391399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:48.816 [2024-12-05 14:03:31.391455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:48.816 [2024-12-05 14:03:31.391469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:48.816 [2024-12-05 14:03:31.391475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:48.816 [2024-12-05 14:03:31.391481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:48.816 [2024-12-05 14:03:31.391495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:48.816 qpair failed and we were unable to recover it. 
00:31:49.075 [2024-12-05 14:03:31.401399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.075 [2024-12-05 14:03:31.401452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.075 [2024-12-05 14:03:31.401466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.075 [2024-12-05 14:03:31.401473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.075 [2024-12-05 14:03:31.401478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.075 [2024-12-05 14:03:31.401493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.075 qpair failed and we were unable to recover it. 
00:31:49.075 [2024-12-05 14:03:31.411411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.075 [2024-12-05 14:03:31.411466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.075 [2024-12-05 14:03:31.411480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.075 [2024-12-05 14:03:31.411486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.075 [2024-12-05 14:03:31.411492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.075 [2024-12-05 14:03:31.411507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.075 qpair failed and we were unable to recover it. 
00:31:49.075 [2024-12-05 14:03:31.421429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.075 [2024-12-05 14:03:31.421483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.075 [2024-12-05 14:03:31.421501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.075 [2024-12-05 14:03:31.421508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.075 [2024-12-05 14:03:31.421514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.075 [2024-12-05 14:03:31.421528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.075 qpair failed and we were unable to recover it. 
00:31:49.075 [2024-12-05 14:03:31.431458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.075 [2024-12-05 14:03:31.431534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.075 [2024-12-05 14:03:31.431548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.075 [2024-12-05 14:03:31.431554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.075 [2024-12-05 14:03:31.431560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.075 [2024-12-05 14:03:31.431576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.075 qpair failed and we were unable to recover it. 
00:31:49.075 [2024-12-05 14:03:31.441516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.075 [2024-12-05 14:03:31.441574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.075 [2024-12-05 14:03:31.441589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.076 [2024-12-05 14:03:31.441597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.076 [2024-12-05 14:03:31.441603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.076 [2024-12-05 14:03:31.441619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.076 qpair failed and we were unable to recover it. 
00:31:49.076 [2024-12-05 14:03:31.451530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.076 [2024-12-05 14:03:31.451614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.076 [2024-12-05 14:03:31.451628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.076 [2024-12-05 14:03:31.451635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.076 [2024-12-05 14:03:31.451641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.076 [2024-12-05 14:03:31.451656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.076 qpair failed and we were unable to recover it. 
00:31:49.076 [2024-12-05 14:03:31.461508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.076 [2024-12-05 14:03:31.461601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.076 [2024-12-05 14:03:31.461615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.076 [2024-12-05 14:03:31.461621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.076 [2024-12-05 14:03:31.461632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.076 [2024-12-05 14:03:31.461647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.076 qpair failed and we were unable to recover it. 
00:31:49.076 [2024-12-05 14:03:31.471577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.076 [2024-12-05 14:03:31.471630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.076 [2024-12-05 14:03:31.471645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.076 [2024-12-05 14:03:31.471652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.076 [2024-12-05 14:03:31.471657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.076 [2024-12-05 14:03:31.471671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.076 qpair failed and we were unable to recover it. 
00:31:49.076 [2024-12-05 14:03:31.481616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.076 [2024-12-05 14:03:31.481670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.076 [2024-12-05 14:03:31.481684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.076 [2024-12-05 14:03:31.481690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.076 [2024-12-05 14:03:31.481696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.076 [2024-12-05 14:03:31.481711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.076 qpair failed and we were unable to recover it. 
00:31:49.076 [2024-12-05 14:03:31.491696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.076 [2024-12-05 14:03:31.491755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.076 [2024-12-05 14:03:31.491769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.076 [2024-12-05 14:03:31.491775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.076 [2024-12-05 14:03:31.491781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.076 [2024-12-05 14:03:31.491794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.076 qpair failed and we were unable to recover it. 
00:31:49.076 [2024-12-05 14:03:31.501711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.076 [2024-12-05 14:03:31.501780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.076 [2024-12-05 14:03:31.501794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.076 [2024-12-05 14:03:31.501800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.076 [2024-12-05 14:03:31.501806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.076 [2024-12-05 14:03:31.501821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.076 qpair failed and we were unable to recover it. 
00:31:49.076 [2024-12-05 14:03:31.511685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.076 [2024-12-05 14:03:31.511739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.076 [2024-12-05 14:03:31.511753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.076 [2024-12-05 14:03:31.511760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.076 [2024-12-05 14:03:31.511766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.076 [2024-12-05 14:03:31.511780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.076 qpair failed and we were unable to recover it. 
00:31:49.076 [2024-12-05 14:03:31.521743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.076 [2024-12-05 14:03:31.521802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.076 [2024-12-05 14:03:31.521816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.076 [2024-12-05 14:03:31.521822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.076 [2024-12-05 14:03:31.521828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.076 [2024-12-05 14:03:31.521841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.076 qpair failed and we were unable to recover it. 
00:31:49.076 [2024-12-05 14:03:31.531763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.076 [2024-12-05 14:03:31.531832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.076 [2024-12-05 14:03:31.531845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.076 [2024-12-05 14:03:31.531851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.076 [2024-12-05 14:03:31.531857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.076 [2024-12-05 14:03:31.531871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.076 qpair failed and we were unable to recover it. 
00:31:49.076 [2024-12-05 14:03:31.541810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.076 [2024-12-05 14:03:31.541859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.076 [2024-12-05 14:03:31.541873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.076 [2024-12-05 14:03:31.541879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.076 [2024-12-05 14:03:31.541885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.076 [2024-12-05 14:03:31.541899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.076 qpair failed and we were unable to recover it. 
00:31:49.076 [2024-12-05 14:03:31.551800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.076 [2024-12-05 14:03:31.551855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.076 [2024-12-05 14:03:31.551871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.076 [2024-12-05 14:03:31.551878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.076 [2024-12-05 14:03:31.551884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.076 [2024-12-05 14:03:31.551898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.076 qpair failed and we were unable to recover it. 
00:31:49.076 [2024-12-05 14:03:31.561842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.076 [2024-12-05 14:03:31.561900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.076 [2024-12-05 14:03:31.561913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.076 [2024-12-05 14:03:31.561920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.076 [2024-12-05 14:03:31.561925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.076 [2024-12-05 14:03:31.561939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.076 qpair failed and we were unable to recover it. 
00:31:49.076 [2024-12-05 14:03:31.571862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.076 [2024-12-05 14:03:31.571932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.077 [2024-12-05 14:03:31.571945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.077 [2024-12-05 14:03:31.571952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.077 [2024-12-05 14:03:31.571957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.077 [2024-12-05 14:03:31.571971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.077 qpair failed and we were unable to recover it. 
00:31:49.077 [2024-12-05 14:03:31.581910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.077 [2024-12-05 14:03:31.581986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.077 [2024-12-05 14:03:31.581999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.077 [2024-12-05 14:03:31.582006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.077 [2024-12-05 14:03:31.582012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.077 [2024-12-05 14:03:31.582026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.077 qpair failed and we were unable to recover it. 
00:31:49.077 [2024-12-05 14:03:31.591931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.077 [2024-12-05 14:03:31.592005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.077 [2024-12-05 14:03:31.592019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.077 [2024-12-05 14:03:31.592025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.077 [2024-12-05 14:03:31.592033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.077 [2024-12-05 14:03:31.592048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.077 qpair failed and we were unable to recover it. 
00:31:49.077 [2024-12-05 14:03:31.601988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.077 [2024-12-05 14:03:31.602044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.077 [2024-12-05 14:03:31.602058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.077 [2024-12-05 14:03:31.602065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.077 [2024-12-05 14:03:31.602071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.077 [2024-12-05 14:03:31.602086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.077 qpair failed and we were unable to recover it. 
00:31:49.077 [2024-12-05 14:03:31.611992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.077 [2024-12-05 14:03:31.612068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.077 [2024-12-05 14:03:31.612083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.077 [2024-12-05 14:03:31.612090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.077 [2024-12-05 14:03:31.612095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.077 [2024-12-05 14:03:31.612110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.077 qpair failed and we were unable to recover it.
00:31:49.077 [2024-12-05 14:03:31.622020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.077 [2024-12-05 14:03:31.622086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.077 [2024-12-05 14:03:31.622099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.077 [2024-12-05 14:03:31.622106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.077 [2024-12-05 14:03:31.622112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.077 [2024-12-05 14:03:31.622126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.077 qpair failed and we were unable to recover it.
00:31:49.077 [2024-12-05 14:03:31.632069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.077 [2024-12-05 14:03:31.632129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.077 [2024-12-05 14:03:31.632144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.077 [2024-12-05 14:03:31.632151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.077 [2024-12-05 14:03:31.632157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.077 [2024-12-05 14:03:31.632172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.077 qpair failed and we were unable to recover it.
00:31:49.077 [2024-12-05 14:03:31.642059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.077 [2024-12-05 14:03:31.642118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.077 [2024-12-05 14:03:31.642132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.077 [2024-12-05 14:03:31.642139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.077 [2024-12-05 14:03:31.642145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.077 [2024-12-05 14:03:31.642159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.077 qpair failed and we were unable to recover it.
00:31:49.077 [2024-12-05 14:03:31.652108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.077 [2024-12-05 14:03:31.652167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.077 [2024-12-05 14:03:31.652181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.077 [2024-12-05 14:03:31.652187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.077 [2024-12-05 14:03:31.652193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.077 [2024-12-05 14:03:31.652207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.077 qpair failed and we were unable to recover it.
00:31:49.337 [2024-12-05 14:03:31.662116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.337 [2024-12-05 14:03:31.662169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.337 [2024-12-05 14:03:31.662182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.337 [2024-12-05 14:03:31.662189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.337 [2024-12-05 14:03:31.662195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.337 [2024-12-05 14:03:31.662208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.337 qpair failed and we were unable to recover it.
00:31:49.337 [2024-12-05 14:03:31.672211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.337 [2024-12-05 14:03:31.672292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.337 [2024-12-05 14:03:31.672305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.337 [2024-12-05 14:03:31.672312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.337 [2024-12-05 14:03:31.672317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.337 [2024-12-05 14:03:31.672331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.337 qpair failed and we were unable to recover it.
00:31:49.337 [2024-12-05 14:03:31.682109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.337 [2024-12-05 14:03:31.682165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.337 [2024-12-05 14:03:31.682183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.337 [2024-12-05 14:03:31.682190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.337 [2024-12-05 14:03:31.682195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.337 [2024-12-05 14:03:31.682210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.337 qpair failed and we were unable to recover it.
00:31:49.337 [2024-12-05 14:03:31.692216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.337 [2024-12-05 14:03:31.692292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.337 [2024-12-05 14:03:31.692305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.337 [2024-12-05 14:03:31.692312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.337 [2024-12-05 14:03:31.692318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.337 [2024-12-05 14:03:31.692332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.337 qpair failed and we were unable to recover it.
00:31:49.337 [2024-12-05 14:03:31.702224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.337 [2024-12-05 14:03:31.702287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.337 [2024-12-05 14:03:31.702300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.337 [2024-12-05 14:03:31.702307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.337 [2024-12-05 14:03:31.702313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.337 [2024-12-05 14:03:31.702326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.337 qpair failed and we were unable to recover it.
00:31:49.337 [2024-12-05 14:03:31.712246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.337 [2024-12-05 14:03:31.712296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.337 [2024-12-05 14:03:31.712310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.337 [2024-12-05 14:03:31.712316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.337 [2024-12-05 14:03:31.712322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.337 [2024-12-05 14:03:31.712336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.337 qpair failed and we were unable to recover it.
00:31:49.337 [2024-12-05 14:03:31.722304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.337 [2024-12-05 14:03:31.722404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.337 [2024-12-05 14:03:31.722417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.337 [2024-12-05 14:03:31.722424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.337 [2024-12-05 14:03:31.722433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.337 [2024-12-05 14:03:31.722447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.337 qpair failed and we were unable to recover it.
00:31:49.337 [2024-12-05 14:03:31.732311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.337 [2024-12-05 14:03:31.732371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.337 [2024-12-05 14:03:31.732385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.337 [2024-12-05 14:03:31.732392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.337 [2024-12-05 14:03:31.732397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.337 [2024-12-05 14:03:31.732412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.337 qpair failed and we were unable to recover it.
00:31:49.337 [2024-12-05 14:03:31.742350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.337 [2024-12-05 14:03:31.742417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.337 [2024-12-05 14:03:31.742430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.337 [2024-12-05 14:03:31.742437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.337 [2024-12-05 14:03:31.742442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.337 [2024-12-05 14:03:31.742456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.337 qpair failed and we were unable to recover it.
00:31:49.337 [2024-12-05 14:03:31.752331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.337 [2024-12-05 14:03:31.752383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.337 [2024-12-05 14:03:31.752397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.337 [2024-12-05 14:03:31.752403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.337 [2024-12-05 14:03:31.752409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.337 [2024-12-05 14:03:31.752424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.337 qpair failed and we were unable to recover it.
00:31:49.337 [2024-12-05 14:03:31.762426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.337 [2024-12-05 14:03:31.762478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.337 [2024-12-05 14:03:31.762491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.337 [2024-12-05 14:03:31.762498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.337 [2024-12-05 14:03:31.762504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.337 [2024-12-05 14:03:31.762518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.337 qpair failed and we were unable to recover it.
00:31:49.337 [2024-12-05 14:03:31.772402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.337 [2024-12-05 14:03:31.772474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.338 [2024-12-05 14:03:31.772488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.338 [2024-12-05 14:03:31.772495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.338 [2024-12-05 14:03:31.772501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.338 [2024-12-05 14:03:31.772515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.338 qpair failed and we were unable to recover it.
00:31:49.338 [2024-12-05 14:03:31.782482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.338 [2024-12-05 14:03:31.782548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.338 [2024-12-05 14:03:31.782562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.338 [2024-12-05 14:03:31.782568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.338 [2024-12-05 14:03:31.782574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.338 [2024-12-05 14:03:31.782588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.338 qpair failed and we were unable to recover it.
00:31:49.338 [2024-12-05 14:03:31.792481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.338 [2024-12-05 14:03:31.792531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.338 [2024-12-05 14:03:31.792544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.338 [2024-12-05 14:03:31.792551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.338 [2024-12-05 14:03:31.792557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.338 [2024-12-05 14:03:31.792571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.338 qpair failed and we were unable to recover it.
00:31:49.338 [2024-12-05 14:03:31.802516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.338 [2024-12-05 14:03:31.802578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.338 [2024-12-05 14:03:31.802592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.338 [2024-12-05 14:03:31.802599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.338 [2024-12-05 14:03:31.802604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.338 [2024-12-05 14:03:31.802619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.338 qpair failed and we were unable to recover it.
00:31:49.338 [2024-12-05 14:03:31.812551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.338 [2024-12-05 14:03:31.812626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.338 [2024-12-05 14:03:31.812642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.338 [2024-12-05 14:03:31.812649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.338 [2024-12-05 14:03:31.812654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.338 [2024-12-05 14:03:31.812669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.338 qpair failed and we were unable to recover it.
00:31:49.338 [2024-12-05 14:03:31.822515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.338 [2024-12-05 14:03:31.822570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.338 [2024-12-05 14:03:31.822584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.338 [2024-12-05 14:03:31.822590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.338 [2024-12-05 14:03:31.822596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.338 [2024-12-05 14:03:31.822611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.338 qpair failed and we were unable to recover it.
00:31:49.338 [2024-12-05 14:03:31.832621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.338 [2024-12-05 14:03:31.832679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.338 [2024-12-05 14:03:31.832693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.338 [2024-12-05 14:03:31.832700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.338 [2024-12-05 14:03:31.832705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.338 [2024-12-05 14:03:31.832720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.338 qpair failed and we were unable to recover it.
00:31:49.338 [2024-12-05 14:03:31.842692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.338 [2024-12-05 14:03:31.842795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.338 [2024-12-05 14:03:31.842809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.338 [2024-12-05 14:03:31.842815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.338 [2024-12-05 14:03:31.842821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.338 [2024-12-05 14:03:31.842835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.338 qpair failed and we were unable to recover it.
00:31:49.338 [2024-12-05 14:03:31.852582] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.338 [2024-12-05 14:03:31.852636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.338 [2024-12-05 14:03:31.852649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.338 [2024-12-05 14:03:31.852655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.338 [2024-12-05 14:03:31.852664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.338 [2024-12-05 14:03:31.852678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.338 qpair failed and we were unable to recover it.
00:31:49.338 [2024-12-05 14:03:31.862702] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.338 [2024-12-05 14:03:31.862782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.338 [2024-12-05 14:03:31.862795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.338 [2024-12-05 14:03:31.862802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.338 [2024-12-05 14:03:31.862809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.338 [2024-12-05 14:03:31.862823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.338 qpair failed and we were unable to recover it.
00:31:49.338 [2024-12-05 14:03:31.872702] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.338 [2024-12-05 14:03:31.872758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.338 [2024-12-05 14:03:31.872772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.338 [2024-12-05 14:03:31.872778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.338 [2024-12-05 14:03:31.872784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.338 [2024-12-05 14:03:31.872799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.338 qpair failed and we were unable to recover it.
00:31:49.338 [2024-12-05 14:03:31.882763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.338 [2024-12-05 14:03:31.882820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.338 [2024-12-05 14:03:31.882834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.338 [2024-12-05 14:03:31.882842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.339 [2024-12-05 14:03:31.882848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.339 [2024-12-05 14:03:31.882862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.339 qpair failed and we were unable to recover it.
00:31:49.339 [2024-12-05 14:03:31.892795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.339 [2024-12-05 14:03:31.892873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.339 [2024-12-05 14:03:31.892886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.339 [2024-12-05 14:03:31.892893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.339 [2024-12-05 14:03:31.892899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.339 [2024-12-05 14:03:31.892912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.339 qpair failed and we were unable to recover it.
00:31:49.339 [2024-12-05 14:03:31.902805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.339 [2024-12-05 14:03:31.902857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.339 [2024-12-05 14:03:31.902871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.339 [2024-12-05 14:03:31.902877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.339 [2024-12-05 14:03:31.902883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.339 [2024-12-05 14:03:31.902897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.339 qpair failed and we were unable to recover it.
00:31:49.339 [2024-12-05 14:03:31.912834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.339 [2024-12-05 14:03:31.912889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.339 [2024-12-05 14:03:31.912903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.339 [2024-12-05 14:03:31.912909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.339 [2024-12-05 14:03:31.912915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.339 [2024-12-05 14:03:31.912929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.339 qpair failed and we were unable to recover it.
00:31:49.599 [2024-12-05 14:03:31.922858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.599 [2024-12-05 14:03:31.922910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.599 [2024-12-05 14:03:31.922924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.599 [2024-12-05 14:03:31.922931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.599 [2024-12-05 14:03:31.922937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.599 [2024-12-05 14:03:31.922951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.599 qpair failed and we were unable to recover it.
00:31:49.599 [2024-12-05 14:03:31.932890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.599 [2024-12-05 14:03:31.932962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.599 [2024-12-05 14:03:31.932975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.599 [2024-12-05 14:03:31.932982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.599 [2024-12-05 14:03:31.932987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.599 [2024-12-05 14:03:31.933001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.599 qpair failed and we were unable to recover it.
00:31:49.599 [2024-12-05 14:03:31.942908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.599 [2024-12-05 14:03:31.942960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.599 [2024-12-05 14:03:31.942977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.599 [2024-12-05 14:03:31.942984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.599 [2024-12-05 14:03:31.942990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.599 [2024-12-05 14:03:31.943003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.599 qpair failed and we were unable to recover it.
00:31:49.599 [2024-12-05 14:03:31.952936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:49.599 [2024-12-05 14:03:31.952988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:49.599 [2024-12-05 14:03:31.953002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:49.599 [2024-12-05 14:03:31.953008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:49.599 [2024-12-05 14:03:31.953014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:49.599 [2024-12-05 14:03:31.953028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:49.599 qpair failed and we were unable to recover it.
00:31:49.599 [2024-12-05 14:03:31.962978] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.599 [2024-12-05 14:03:31.963061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.599 [2024-12-05 14:03:31.963075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.599 [2024-12-05 14:03:31.963081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.599 [2024-12-05 14:03:31.963087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.599 [2024-12-05 14:03:31.963101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.599 qpair failed and we were unable to recover it. 
00:31:49.599 [2024-12-05 14:03:31.972991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.599 [2024-12-05 14:03:31.973049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.599 [2024-12-05 14:03:31.973063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.599 [2024-12-05 14:03:31.973070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.599 [2024-12-05 14:03:31.973076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.599 [2024-12-05 14:03:31.973089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.599 qpair failed and we were unable to recover it. 
00:31:49.599 [2024-12-05 14:03:31.983027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.599 [2024-12-05 14:03:31.983123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.599 [2024-12-05 14:03:31.983137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.599 [2024-12-05 14:03:31.983143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.599 [2024-12-05 14:03:31.983152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.599 [2024-12-05 14:03:31.983166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.599 qpair failed and we were unable to recover it. 
00:31:49.599 [2024-12-05 14:03:31.993026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.599 [2024-12-05 14:03:31.993081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.599 [2024-12-05 14:03:31.993095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.599 [2024-12-05 14:03:31.993102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.599 [2024-12-05 14:03:31.993109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.599 [2024-12-05 14:03:31.993123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.599 qpair failed and we were unable to recover it. 
00:31:49.599 [2024-12-05 14:03:32.003050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.599 [2024-12-05 14:03:32.003137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.599 [2024-12-05 14:03:32.003152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.599 [2024-12-05 14:03:32.003159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.599 [2024-12-05 14:03:32.003165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.599 [2024-12-05 14:03:32.003179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.599 qpair failed and we were unable to recover it. 
00:31:49.599 [2024-12-05 14:03:32.013099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.599 [2024-12-05 14:03:32.013152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.599 [2024-12-05 14:03:32.013166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.600 [2024-12-05 14:03:32.013172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.600 [2024-12-05 14:03:32.013178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.600 [2024-12-05 14:03:32.013193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.600 qpair failed and we were unable to recover it. 
00:31:49.600 [2024-12-05 14:03:32.023117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.600 [2024-12-05 14:03:32.023199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.600 [2024-12-05 14:03:32.023213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.600 [2024-12-05 14:03:32.023220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.600 [2024-12-05 14:03:32.023226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.600 [2024-12-05 14:03:32.023240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.600 qpair failed and we were unable to recover it. 
00:31:49.600 [2024-12-05 14:03:32.033168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.600 [2024-12-05 14:03:32.033217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.600 [2024-12-05 14:03:32.033231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.600 [2024-12-05 14:03:32.033237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.600 [2024-12-05 14:03:32.033243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.600 [2024-12-05 14:03:32.033257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.600 qpair failed and we were unable to recover it. 
00:31:49.600 [2024-12-05 14:03:32.043203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.600 [2024-12-05 14:03:32.043283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.600 [2024-12-05 14:03:32.043297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.600 [2024-12-05 14:03:32.043304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.600 [2024-12-05 14:03:32.043309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.600 [2024-12-05 14:03:32.043324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.600 qpair failed and we were unable to recover it. 
00:31:49.600 [2024-12-05 14:03:32.053198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.600 [2024-12-05 14:03:32.053285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.600 [2024-12-05 14:03:32.053311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.600 [2024-12-05 14:03:32.053318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.600 [2024-12-05 14:03:32.053324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.600 [2024-12-05 14:03:32.053339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.600 qpair failed and we were unable to recover it. 
00:31:49.600 [2024-12-05 14:03:32.063180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.600 [2024-12-05 14:03:32.063267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.600 [2024-12-05 14:03:32.063282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.600 [2024-12-05 14:03:32.063289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.600 [2024-12-05 14:03:32.063295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.600 [2024-12-05 14:03:32.063310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.600 qpair failed and we were unable to recover it. 
00:31:49.600 [2024-12-05 14:03:32.073229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.600 [2024-12-05 14:03:32.073314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.600 [2024-12-05 14:03:32.073331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.600 [2024-12-05 14:03:32.073337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.600 [2024-12-05 14:03:32.073343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.600 [2024-12-05 14:03:32.073356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.600 qpair failed and we were unable to recover it. 
00:31:49.600 [2024-12-05 14:03:32.083329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.600 [2024-12-05 14:03:32.083391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.600 [2024-12-05 14:03:32.083406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.600 [2024-12-05 14:03:32.083413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.600 [2024-12-05 14:03:32.083419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.600 [2024-12-05 14:03:32.083433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.600 qpair failed and we were unable to recover it. 
00:31:49.600 [2024-12-05 14:03:32.093322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.600 [2024-12-05 14:03:32.093382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.600 [2024-12-05 14:03:32.093397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.600 [2024-12-05 14:03:32.093404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.600 [2024-12-05 14:03:32.093410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.600 [2024-12-05 14:03:32.093425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.600 qpair failed and we were unable to recover it. 
00:31:49.600 [2024-12-05 14:03:32.103348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.600 [2024-12-05 14:03:32.103404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.600 [2024-12-05 14:03:32.103418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.600 [2024-12-05 14:03:32.103425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.600 [2024-12-05 14:03:32.103431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.600 [2024-12-05 14:03:32.103445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.600 qpair failed and we were unable to recover it. 
00:31:49.600 [2024-12-05 14:03:32.113297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.600 [2024-12-05 14:03:32.113354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.600 [2024-12-05 14:03:32.113372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.600 [2024-12-05 14:03:32.113379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.600 [2024-12-05 14:03:32.113390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.600 [2024-12-05 14:03:32.113405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.600 qpair failed and we were unable to recover it. 
00:31:49.600 [2024-12-05 14:03:32.123388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.600 [2024-12-05 14:03:32.123464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.600 [2024-12-05 14:03:32.123477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.600 [2024-12-05 14:03:32.123484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.600 [2024-12-05 14:03:32.123490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.600 [2024-12-05 14:03:32.123505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.600 qpair failed and we were unable to recover it. 
00:31:49.600 [2024-12-05 14:03:32.133372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.600 [2024-12-05 14:03:32.133424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.600 [2024-12-05 14:03:32.133438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.600 [2024-12-05 14:03:32.133444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.600 [2024-12-05 14:03:32.133450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.600 [2024-12-05 14:03:32.133465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.600 qpair failed and we were unable to recover it. 
00:31:49.600 [2024-12-05 14:03:32.143436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.600 [2024-12-05 14:03:32.143519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.600 [2024-12-05 14:03:32.143533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.600 [2024-12-05 14:03:32.143540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.600 [2024-12-05 14:03:32.143546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.600 [2024-12-05 14:03:32.143560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.600 qpair failed and we were unable to recover it. 
00:31:49.600 [2024-12-05 14:03:32.153535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.600 [2024-12-05 14:03:32.153593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.600 [2024-12-05 14:03:32.153605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.600 [2024-12-05 14:03:32.153612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.600 [2024-12-05 14:03:32.153618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.600 [2024-12-05 14:03:32.153632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.600 qpair failed and we were unable to recover it. 
00:31:49.600 [2024-12-05 14:03:32.163491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.600 [2024-12-05 14:03:32.163546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.600 [2024-12-05 14:03:32.163560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.600 [2024-12-05 14:03:32.163566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.600 [2024-12-05 14:03:32.163572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.600 [2024-12-05 14:03:32.163586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.600 qpair failed and we were unable to recover it. 
00:31:49.600 [2024-12-05 14:03:32.173479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.600 [2024-12-05 14:03:32.173543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.600 [2024-12-05 14:03:32.173557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.600 [2024-12-05 14:03:32.173563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.600 [2024-12-05 14:03:32.173569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.600 [2024-12-05 14:03:32.173584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.600 qpair failed and we were unable to recover it. 
00:31:49.600 [2024-12-05 14:03:32.183577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.600 [2024-12-05 14:03:32.183676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.600 [2024-12-05 14:03:32.183689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.600 [2024-12-05 14:03:32.183695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.600 [2024-12-05 14:03:32.183701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.600 [2024-12-05 14:03:32.183715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.600 qpair failed and we were unable to recover it. 
00:31:49.860 [2024-12-05 14:03:32.193555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.860 [2024-12-05 14:03:32.193611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.860 [2024-12-05 14:03:32.193624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.860 [2024-12-05 14:03:32.193631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.860 [2024-12-05 14:03:32.193637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.860 [2024-12-05 14:03:32.193650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.860 qpair failed and we were unable to recover it. 
00:31:49.860 [2024-12-05 14:03:32.203568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.860 [2024-12-05 14:03:32.203621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.860 [2024-12-05 14:03:32.203639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.860 [2024-12-05 14:03:32.203646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.860 [2024-12-05 14:03:32.203652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.860 [2024-12-05 14:03:32.203666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.860 qpair failed and we were unable to recover it. 
00:31:49.860 [2024-12-05 14:03:32.213653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.861 [2024-12-05 14:03:32.213734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.861 [2024-12-05 14:03:32.213748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.861 [2024-12-05 14:03:32.213755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.861 [2024-12-05 14:03:32.213761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.861 [2024-12-05 14:03:32.213775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.861 qpair failed and we were unable to recover it. 
00:31:49.861 [2024-12-05 14:03:32.223621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.861 [2024-12-05 14:03:32.223676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.861 [2024-12-05 14:03:32.223690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.861 [2024-12-05 14:03:32.223696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.861 [2024-12-05 14:03:32.223702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.861 [2024-12-05 14:03:32.223716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.861 qpair failed and we were unable to recover it. 
00:31:49.861 [2024-12-05 14:03:32.233686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.861 [2024-12-05 14:03:32.233783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.861 [2024-12-05 14:03:32.233796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.861 [2024-12-05 14:03:32.233803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.861 [2024-12-05 14:03:32.233809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.861 [2024-12-05 14:03:32.233822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.861 qpair failed and we were unable to recover it. 
00:31:49.861 [2024-12-05 14:03:32.243781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.861 [2024-12-05 14:03:32.243852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.861 [2024-12-05 14:03:32.243865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.861 [2024-12-05 14:03:32.243871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.861 [2024-12-05 14:03:32.243881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.861 [2024-12-05 14:03:32.243895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.861 qpair failed and we were unable to recover it. 
00:31:49.861 [2024-12-05 14:03:32.253786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.861 [2024-12-05 14:03:32.253844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.861 [2024-12-05 14:03:32.253858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.861 [2024-12-05 14:03:32.253865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.861 [2024-12-05 14:03:32.253870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.861 [2024-12-05 14:03:32.253885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.861 qpair failed and we were unable to recover it. 
00:31:49.861 [2024-12-05 14:03:32.263733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.861 [2024-12-05 14:03:32.263785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.861 [2024-12-05 14:03:32.263799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.861 [2024-12-05 14:03:32.263805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.861 [2024-12-05 14:03:32.263811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.861 [2024-12-05 14:03:32.263826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.861 qpair failed and we were unable to recover it. 
00:31:49.861 [2024-12-05 14:03:32.273819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.861 [2024-12-05 14:03:32.273869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.861 [2024-12-05 14:03:32.273882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.861 [2024-12-05 14:03:32.273888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.861 [2024-12-05 14:03:32.273894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.861 [2024-12-05 14:03:32.273908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.861 qpair failed and we were unable to recover it. 
00:31:49.861 [2024-12-05 14:03:32.283839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.861 [2024-12-05 14:03:32.283890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.861 [2024-12-05 14:03:32.283903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.861 [2024-12-05 14:03:32.283909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.861 [2024-12-05 14:03:32.283915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.861 [2024-12-05 14:03:32.283929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.861 qpair failed and we were unable to recover it. 
00:31:49.861 [2024-12-05 14:03:32.293882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.861 [2024-12-05 14:03:32.293936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.861 [2024-12-05 14:03:32.293950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.861 [2024-12-05 14:03:32.293956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.861 [2024-12-05 14:03:32.293962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.861 [2024-12-05 14:03:32.293976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.861 qpair failed and we were unable to recover it. 
00:31:49.861 [2024-12-05 14:03:32.303942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.861 [2024-12-05 14:03:32.303994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.861 [2024-12-05 14:03:32.304008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.861 [2024-12-05 14:03:32.304015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.861 [2024-12-05 14:03:32.304021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.861 [2024-12-05 14:03:32.304035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.861 qpair failed and we were unable to recover it. 
00:31:49.861 [2024-12-05 14:03:32.313937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.861 [2024-12-05 14:03:32.313994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.861 [2024-12-05 14:03:32.314009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.861 [2024-12-05 14:03:32.314015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.861 [2024-12-05 14:03:32.314021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.861 [2024-12-05 14:03:32.314036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.861 qpair failed and we were unable to recover it. 
00:31:49.861 [2024-12-05 14:03:32.323922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.861 [2024-12-05 14:03:32.323977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.861 [2024-12-05 14:03:32.323992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.861 [2024-12-05 14:03:32.323999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.861 [2024-12-05 14:03:32.324005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.861 [2024-12-05 14:03:32.324019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.861 qpair failed and we were unable to recover it. 
00:31:49.861 [2024-12-05 14:03:32.334013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.861 [2024-12-05 14:03:32.334077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.861 [2024-12-05 14:03:32.334094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.861 [2024-12-05 14:03:32.334100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.861 [2024-12-05 14:03:32.334106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.861 [2024-12-05 14:03:32.334120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.861 qpair failed and we were unable to recover it. 
00:31:49.861 [2024-12-05 14:03:32.343980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.861 [2024-12-05 14:03:32.344035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.862 [2024-12-05 14:03:32.344048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.862 [2024-12-05 14:03:32.344055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.862 [2024-12-05 14:03:32.344061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.862 [2024-12-05 14:03:32.344074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.862 qpair failed and we were unable to recover it. 
00:31:49.862 [2024-12-05 14:03:32.354004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.862 [2024-12-05 14:03:32.354098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.862 [2024-12-05 14:03:32.354111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.862 [2024-12-05 14:03:32.354117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.862 [2024-12-05 14:03:32.354123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.862 [2024-12-05 14:03:32.354137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.862 qpair failed and we were unable to recover it. 
00:31:49.862 [2024-12-05 14:03:32.364116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.862 [2024-12-05 14:03:32.364198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.862 [2024-12-05 14:03:32.364211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.862 [2024-12-05 14:03:32.364218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.862 [2024-12-05 14:03:32.364223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.862 [2024-12-05 14:03:32.364238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.862 qpair failed and we were unable to recover it. 
00:31:49.862 [2024-12-05 14:03:32.374157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.862 [2024-12-05 14:03:32.374207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.862 [2024-12-05 14:03:32.374220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.862 [2024-12-05 14:03:32.374226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.862 [2024-12-05 14:03:32.374236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.862 [2024-12-05 14:03:32.374250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.862 qpair failed and we were unable to recover it. 
00:31:49.862 [2024-12-05 14:03:32.384155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.862 [2024-12-05 14:03:32.384208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.862 [2024-12-05 14:03:32.384224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.862 [2024-12-05 14:03:32.384231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.862 [2024-12-05 14:03:32.384236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.862 [2024-12-05 14:03:32.384252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.862 qpair failed and we were unable to recover it. 
00:31:49.862 [2024-12-05 14:03:32.394100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.862 [2024-12-05 14:03:32.394155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.862 [2024-12-05 14:03:32.394169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.862 [2024-12-05 14:03:32.394175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.862 [2024-12-05 14:03:32.394181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.862 [2024-12-05 14:03:32.394195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.862 qpair failed and we were unable to recover it. 
00:31:49.862 [2024-12-05 14:03:32.404167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.862 [2024-12-05 14:03:32.404265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.862 [2024-12-05 14:03:32.404280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.862 [2024-12-05 14:03:32.404287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.862 [2024-12-05 14:03:32.404293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.862 [2024-12-05 14:03:32.404307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.862 qpair failed and we were unable to recover it. 
00:31:49.862 [2024-12-05 14:03:32.414170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.862 [2024-12-05 14:03:32.414278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.862 [2024-12-05 14:03:32.414293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.862 [2024-12-05 14:03:32.414299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.862 [2024-12-05 14:03:32.414305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.862 [2024-12-05 14:03:32.414319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.862 qpair failed and we were unable to recover it. 
00:31:49.862 [2024-12-05 14:03:32.424197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.862 [2024-12-05 14:03:32.424293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.862 [2024-12-05 14:03:32.424307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.862 [2024-12-05 14:03:32.424313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.862 [2024-12-05 14:03:32.424319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.862 [2024-12-05 14:03:32.424333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.862 qpair failed and we were unable to recover it. 
00:31:49.862 [2024-12-05 14:03:32.434257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.862 [2024-12-05 14:03:32.434347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.862 [2024-12-05 14:03:32.434361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.862 [2024-12-05 14:03:32.434371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.862 [2024-12-05 14:03:32.434377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.862 [2024-12-05 14:03:32.434391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.862 qpair failed and we were unable to recover it. 
00:31:49.862 [2024-12-05 14:03:32.444323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:49.862 [2024-12-05 14:03:32.444411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:49.862 [2024-12-05 14:03:32.444427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:49.862 [2024-12-05 14:03:32.444434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:49.862 [2024-12-05 14:03:32.444440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:49.862 [2024-12-05 14:03:32.444455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:49.862 qpair failed and we were unable to recover it. 
00:31:50.123 [2024-12-05 14:03:32.454283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.123 [2024-12-05 14:03:32.454344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.123 [2024-12-05 14:03:32.454359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.123 [2024-12-05 14:03:32.454365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.123 [2024-12-05 14:03:32.454375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.123 [2024-12-05 14:03:32.454390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.123 qpair failed and we were unable to recover it. 
00:31:50.123 [2024-12-05 14:03:32.464375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.123 [2024-12-05 14:03:32.464430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.123 [2024-12-05 14:03:32.464446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.123 [2024-12-05 14:03:32.464453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.123 [2024-12-05 14:03:32.464459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.123 [2024-12-05 14:03:32.464474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.123 qpair failed and we were unable to recover it. 
00:31:50.123 [2024-12-05 14:03:32.474462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.123 [2024-12-05 14:03:32.474517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.123 [2024-12-05 14:03:32.474531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.123 [2024-12-05 14:03:32.474537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.123 [2024-12-05 14:03:32.474543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.123 [2024-12-05 14:03:32.474557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.123 qpair failed and we were unable to recover it. 
00:31:50.123 [2024-12-05 14:03:32.484449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.123 [2024-12-05 14:03:32.484502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.123 [2024-12-05 14:03:32.484516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.123 [2024-12-05 14:03:32.484523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.123 [2024-12-05 14:03:32.484529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.123 [2024-12-05 14:03:32.484543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.123 qpair failed and we were unable to recover it. 
00:31:50.123 [2024-12-05 14:03:32.494466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.123 [2024-12-05 14:03:32.494526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.123 [2024-12-05 14:03:32.494539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.123 [2024-12-05 14:03:32.494545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.123 [2024-12-05 14:03:32.494551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.123 [2024-12-05 14:03:32.494565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.123 qpair failed and we were unable to recover it. 
00:31:50.123 [2024-12-05 14:03:32.504536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.123 [2024-12-05 14:03:32.504602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.123 [2024-12-05 14:03:32.504615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.123 [2024-12-05 14:03:32.504622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.123 [2024-12-05 14:03:32.504630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.123 [2024-12-05 14:03:32.504644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.123 qpair failed and we were unable to recover it. 
00:31:50.123 [2024-12-05 14:03:32.514531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.123 [2024-12-05 14:03:32.514580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.123 [2024-12-05 14:03:32.514595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.123 [2024-12-05 14:03:32.514601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.123 [2024-12-05 14:03:32.514607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.123 [2024-12-05 14:03:32.514622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.123 qpair failed and we were unable to recover it. 
00:31:50.123 [2024-12-05 14:03:32.524583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.123 [2024-12-05 14:03:32.524638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.123 [2024-12-05 14:03:32.524652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.123 [2024-12-05 14:03:32.524659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.123 [2024-12-05 14:03:32.524665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.123 [2024-12-05 14:03:32.524679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.123 qpair failed and we were unable to recover it. 
00:31:50.123 [2024-12-05 14:03:32.534586] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.123 [2024-12-05 14:03:32.534657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.123 [2024-12-05 14:03:32.534670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.123 [2024-12-05 14:03:32.534677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.123 [2024-12-05 14:03:32.534682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.123 [2024-12-05 14:03:32.534696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.123 qpair failed and we were unable to recover it. 
00:31:50.123 [2024-12-05 14:03:32.544646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.123 [2024-12-05 14:03:32.544709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.123 [2024-12-05 14:03:32.544723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.123 [2024-12-05 14:03:32.544729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.123 [2024-12-05 14:03:32.544735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.123 [2024-12-05 14:03:32.544750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.123 qpair failed and we were unable to recover it. 
00:31:50.123 [2024-12-05 14:03:32.554690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.123 [2024-12-05 14:03:32.554748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.123 [2024-12-05 14:03:32.554762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.123 [2024-12-05 14:03:32.554769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.123 [2024-12-05 14:03:32.554775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.123 [2024-12-05 14:03:32.554789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.123 qpair failed and we were unable to recover it. 
00:31:50.123 [2024-12-05 14:03:32.564733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.124 [2024-12-05 14:03:32.564790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.124 [2024-12-05 14:03:32.564804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.124 [2024-12-05 14:03:32.564810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.124 [2024-12-05 14:03:32.564816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.124 [2024-12-05 14:03:32.564830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.124 qpair failed and we were unable to recover it. 
00:31:50.124 [2024-12-05 14:03:32.574707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.124 [2024-12-05 14:03:32.574770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.124 [2024-12-05 14:03:32.574784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.124 [2024-12-05 14:03:32.574790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.124 [2024-12-05 14:03:32.574797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.124 [2024-12-05 14:03:32.574814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.124 qpair failed and we were unable to recover it. 
00:31:50.124 [2024-12-05 14:03:32.584720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.124 [2024-12-05 14:03:32.584775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.124 [2024-12-05 14:03:32.584788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.124 [2024-12-05 14:03:32.584794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.124 [2024-12-05 14:03:32.584800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.124 [2024-12-05 14:03:32.584813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.124 qpair failed and we were unable to recover it. 
00:31:50.124 [2024-12-05 14:03:32.594827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.124 [2024-12-05 14:03:32.594913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.124 [2024-12-05 14:03:32.594929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.124 [2024-12-05 14:03:32.594935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.124 [2024-12-05 14:03:32.594941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.124 [2024-12-05 14:03:32.594955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.124 qpair failed and we were unable to recover it. 
00:31:50.124 [2024-12-05 14:03:32.604794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.124 [2024-12-05 14:03:32.604868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.124 [2024-12-05 14:03:32.604882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.124 [2024-12-05 14:03:32.604889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.124 [2024-12-05 14:03:32.604894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.124 [2024-12-05 14:03:32.604909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.124 qpair failed and we were unable to recover it. 
00:31:50.124 [2024-12-05 14:03:32.614809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.124 [2024-12-05 14:03:32.614889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.124 [2024-12-05 14:03:32.614903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.124 [2024-12-05 14:03:32.614909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.124 [2024-12-05 14:03:32.614915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.124 [2024-12-05 14:03:32.614929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.124 qpair failed and we were unable to recover it. 
00:31:50.124 [2024-12-05 14:03:32.624835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.124 [2024-12-05 14:03:32.624888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.124 [2024-12-05 14:03:32.624902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.124 [2024-12-05 14:03:32.624908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.124 [2024-12-05 14:03:32.624914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.124 [2024-12-05 14:03:32.624928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.124 qpair failed and we were unable to recover it. 
00:31:50.124 [2024-12-05 14:03:32.634852] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.124 [2024-12-05 14:03:32.634901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.124 [2024-12-05 14:03:32.634915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.124 [2024-12-05 14:03:32.634921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.124 [2024-12-05 14:03:32.634930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.124 [2024-12-05 14:03:32.634943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.124 qpair failed and we were unable to recover it. 
00:31:50.124 [2024-12-05 14:03:32.644886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.124 [2024-12-05 14:03:32.644942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.124 [2024-12-05 14:03:32.644955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.124 [2024-12-05 14:03:32.644961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.124 [2024-12-05 14:03:32.644967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.124 [2024-12-05 14:03:32.644981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.124 qpair failed and we were unable to recover it. 
00:31:50.124 [2024-12-05 14:03:32.654914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.124 [2024-12-05 14:03:32.654990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.124 [2024-12-05 14:03:32.655003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.124 [2024-12-05 14:03:32.655009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.124 [2024-12-05 14:03:32.655015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.124 [2024-12-05 14:03:32.655029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.124 qpair failed and we were unable to recover it. 
00:31:50.124 [2024-12-05 14:03:32.665007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.124 [2024-12-05 14:03:32.665113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.124 [2024-12-05 14:03:32.665127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.124 [2024-12-05 14:03:32.665133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.124 [2024-12-05 14:03:32.665138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.124 [2024-12-05 14:03:32.665152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.124 qpair failed and we were unable to recover it. 
00:31:50.124 [2024-12-05 14:03:32.674976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.124 [2024-12-05 14:03:32.675067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.124 [2024-12-05 14:03:32.675081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.124 [2024-12-05 14:03:32.675087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.124 [2024-12-05 14:03:32.675093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.124 [2024-12-05 14:03:32.675107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.124 qpair failed and we were unable to recover it. 
00:31:50.124 [2024-12-05 14:03:32.684983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.124 [2024-12-05 14:03:32.685038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.124 [2024-12-05 14:03:32.685052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.124 [2024-12-05 14:03:32.685058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.124 [2024-12-05 14:03:32.685064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.124 [2024-12-05 14:03:32.685078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.124 qpair failed and we were unable to recover it. 
00:31:50.124 [2024-12-05 14:03:32.695024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.124 [2024-12-05 14:03:32.695075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.124 [2024-12-05 14:03:32.695088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.124 [2024-12-05 14:03:32.695095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.124 [2024-12-05 14:03:32.695101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.124 [2024-12-05 14:03:32.695115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.124 qpair failed and we were unable to recover it. 
00:31:50.124 [2024-12-05 14:03:32.705048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.124 [2024-12-05 14:03:32.705127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.124 [2024-12-05 14:03:32.705141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.124 [2024-12-05 14:03:32.705148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.124 [2024-12-05 14:03:32.705154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.124 [2024-12-05 14:03:32.705168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.124 qpair failed and we were unable to recover it. 
00:31:50.385 [2024-12-05 14:03:32.715068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.385 [2024-12-05 14:03:32.715117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.385 [2024-12-05 14:03:32.715132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.386 [2024-12-05 14:03:32.715139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.386 [2024-12-05 14:03:32.715145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.386 [2024-12-05 14:03:32.715159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.386 qpair failed and we were unable to recover it. 
00:31:50.386 [2024-12-05 14:03:32.725165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.386 [2024-12-05 14:03:32.725233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.386 [2024-12-05 14:03:32.725250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.386 [2024-12-05 14:03:32.725257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.386 [2024-12-05 14:03:32.725263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.386 [2024-12-05 14:03:32.725278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.386 qpair failed and we were unable to recover it. 
00:31:50.386 [2024-12-05 14:03:32.735193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.386 [2024-12-05 14:03:32.735250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.386 [2024-12-05 14:03:32.735264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.386 [2024-12-05 14:03:32.735270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.386 [2024-12-05 14:03:32.735276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.386 [2024-12-05 14:03:32.735290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.386 qpair failed and we were unable to recover it. 
00:31:50.386 [2024-12-05 14:03:32.745157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.386 [2024-12-05 14:03:32.745215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.386 [2024-12-05 14:03:32.745229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.386 [2024-12-05 14:03:32.745236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.386 [2024-12-05 14:03:32.745242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.386 [2024-12-05 14:03:32.745255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.386 qpair failed and we were unable to recover it. 
00:31:50.386 [2024-12-05 14:03:32.755179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.386 [2024-12-05 14:03:32.755264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.386 [2024-12-05 14:03:32.755278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.386 [2024-12-05 14:03:32.755284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.386 [2024-12-05 14:03:32.755290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.386 [2024-12-05 14:03:32.755304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.386 qpair failed and we were unable to recover it. 
00:31:50.386 [2024-12-05 14:03:32.765261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.386 [2024-12-05 14:03:32.765319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.386 [2024-12-05 14:03:32.765333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.386 [2024-12-05 14:03:32.765339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.386 [2024-12-05 14:03:32.765350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.386 [2024-12-05 14:03:32.765364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.386 qpair failed and we were unable to recover it. 
00:31:50.386 [2024-12-05 14:03:32.775251] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.386 [2024-12-05 14:03:32.775308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.386 [2024-12-05 14:03:32.775322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.386 [2024-12-05 14:03:32.775329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.386 [2024-12-05 14:03:32.775334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.386 [2024-12-05 14:03:32.775349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.386 qpair failed and we were unable to recover it. 
00:31:50.386 [2024-12-05 14:03:32.785320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.386 [2024-12-05 14:03:32.785383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.386 [2024-12-05 14:03:32.785398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.386 [2024-12-05 14:03:32.785404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.386 [2024-12-05 14:03:32.785410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.386 [2024-12-05 14:03:32.785425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.386 qpair failed and we were unable to recover it. 
00:31:50.386 [2024-12-05 14:03:32.795302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.386 [2024-12-05 14:03:32.795362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.386 [2024-12-05 14:03:32.795379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.386 [2024-12-05 14:03:32.795385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.386 [2024-12-05 14:03:32.795391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.386 [2024-12-05 14:03:32.795404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.386 qpair failed and we were unable to recover it. 
00:31:50.386 [2024-12-05 14:03:32.805340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.386 [2024-12-05 14:03:32.805399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.386 [2024-12-05 14:03:32.805415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.386 [2024-12-05 14:03:32.805421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.386 [2024-12-05 14:03:32.805427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.386 [2024-12-05 14:03:32.805442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.386 qpair failed and we were unable to recover it. 
00:31:50.386 [2024-12-05 14:03:32.815362] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.386 [2024-12-05 14:03:32.815445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.386 [2024-12-05 14:03:32.815459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.386 [2024-12-05 14:03:32.815466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.386 [2024-12-05 14:03:32.815472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.386 [2024-12-05 14:03:32.815486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.386 qpair failed and we were unable to recover it. 
00:31:50.386 [2024-12-05 14:03:32.825429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.386 [2024-12-05 14:03:32.825485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.386 [2024-12-05 14:03:32.825500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.386 [2024-12-05 14:03:32.825506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.386 [2024-12-05 14:03:32.825512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.386 [2024-12-05 14:03:32.825526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.387 qpair failed and we were unable to recover it. 
00:31:50.387 [2024-12-05 14:03:32.835409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.387 [2024-12-05 14:03:32.835483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.387 [2024-12-05 14:03:32.835497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.387 [2024-12-05 14:03:32.835504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.387 [2024-12-05 14:03:32.835509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.387 [2024-12-05 14:03:32.835523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.387 qpair failed and we were unable to recover it. 
00:31:50.387 [2024-12-05 14:03:32.845499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.387 [2024-12-05 14:03:32.845570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.387 [2024-12-05 14:03:32.845583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.387 [2024-12-05 14:03:32.845589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.387 [2024-12-05 14:03:32.845596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.387 [2024-12-05 14:03:32.845610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.387 qpair failed and we were unable to recover it. 
00:31:50.387 [2024-12-05 14:03:32.855497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.387 [2024-12-05 14:03:32.855561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.387 [2024-12-05 14:03:32.855578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.387 [2024-12-05 14:03:32.855585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.387 [2024-12-05 14:03:32.855591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.387 [2024-12-05 14:03:32.855605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.387 qpair failed and we were unable to recover it.
00:31:50.387 [2024-12-05 14:03:32.865535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.387 [2024-12-05 14:03:32.865588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.387 [2024-12-05 14:03:32.865601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.387 [2024-12-05 14:03:32.865608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.387 [2024-12-05 14:03:32.865614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.387 [2024-12-05 14:03:32.865628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.387 qpair failed and we were unable to recover it.
00:31:50.387 [2024-12-05 14:03:32.875524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.387 [2024-12-05 14:03:32.875623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.387 [2024-12-05 14:03:32.875636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.387 [2024-12-05 14:03:32.875642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.387 [2024-12-05 14:03:32.875648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.387 [2024-12-05 14:03:32.875662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.387 qpair failed and we were unable to recover it.
00:31:50.387 [2024-12-05 14:03:32.885551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.387 [2024-12-05 14:03:32.885606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.387 [2024-12-05 14:03:32.885620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.387 [2024-12-05 14:03:32.885627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.387 [2024-12-05 14:03:32.885632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.387 [2024-12-05 14:03:32.885646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.387 qpair failed and we were unable to recover it.
00:31:50.387 [2024-12-05 14:03:32.895586] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.387 [2024-12-05 14:03:32.895650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.387 [2024-12-05 14:03:32.895664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.387 [2024-12-05 14:03:32.895671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.387 [2024-12-05 14:03:32.895679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.387 [2024-12-05 14:03:32.895694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.387 qpair failed and we were unable to recover it.
00:31:50.387 [2024-12-05 14:03:32.905551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.387 [2024-12-05 14:03:32.905640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.387 [2024-12-05 14:03:32.905653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.387 [2024-12-05 14:03:32.905659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.387 [2024-12-05 14:03:32.905665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.387 [2024-12-05 14:03:32.905679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.387 qpair failed and we were unable to recover it.
00:31:50.387 [2024-12-05 14:03:32.915660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.387 [2024-12-05 14:03:32.915716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.387 [2024-12-05 14:03:32.915730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.387 [2024-12-05 14:03:32.915736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.387 [2024-12-05 14:03:32.915742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.387 [2024-12-05 14:03:32.915756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.387 qpair failed and we were unable to recover it.
00:31:50.387 [2024-12-05 14:03:32.925706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.387 [2024-12-05 14:03:32.925764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.387 [2024-12-05 14:03:32.925779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.387 [2024-12-05 14:03:32.925786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.387 [2024-12-05 14:03:32.925792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.387 [2024-12-05 14:03:32.925806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.387 qpair failed and we were unable to recover it.
00:31:50.387 [2024-12-05 14:03:32.935723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.387 [2024-12-05 14:03:32.935777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.387 [2024-12-05 14:03:32.935791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.387 [2024-12-05 14:03:32.935798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.387 [2024-12-05 14:03:32.935804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.387 [2024-12-05 14:03:32.935818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.387 qpair failed and we were unable to recover it.
00:31:50.387 [2024-12-05 14:03:32.945719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.387 [2024-12-05 14:03:32.945770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.387 [2024-12-05 14:03:32.945784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.387 [2024-12-05 14:03:32.945790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.387 [2024-12-05 14:03:32.945796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.387 [2024-12-05 14:03:32.945811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.387 qpair failed and we were unable to recover it.
00:31:50.387 [2024-12-05 14:03:32.955765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.387 [2024-12-05 14:03:32.955848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.387 [2024-12-05 14:03:32.955861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.387 [2024-12-05 14:03:32.955868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.387 [2024-12-05 14:03:32.955874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.388 [2024-12-05 14:03:32.955888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.388 qpair failed and we were unable to recover it.
00:31:50.388 [2024-12-05 14:03:32.965788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.388 [2024-12-05 14:03:32.965841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.388 [2024-12-05 14:03:32.965855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.388 [2024-12-05 14:03:32.965862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.388 [2024-12-05 14:03:32.965868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.388 [2024-12-05 14:03:32.965883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.388 qpair failed and we were unable to recover it.
00:31:50.649 [2024-12-05 14:03:32.975842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.649 [2024-12-05 14:03:32.975891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.649 [2024-12-05 14:03:32.975904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.649 [2024-12-05 14:03:32.975911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.649 [2024-12-05 14:03:32.975917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.649 [2024-12-05 14:03:32.975931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.649 qpair failed and we were unable to recover it.
00:31:50.649 [2024-12-05 14:03:32.985762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.649 [2024-12-05 14:03:32.985814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.649 [2024-12-05 14:03:32.985830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.649 [2024-12-05 14:03:32.985836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.649 [2024-12-05 14:03:32.985842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.649 [2024-12-05 14:03:32.985856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.649 qpair failed and we were unable to recover it.
00:31:50.649 [2024-12-05 14:03:32.995939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.649 [2024-12-05 14:03:32.995992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.649 [2024-12-05 14:03:32.996006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.649 [2024-12-05 14:03:32.996012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.649 [2024-12-05 14:03:32.996018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.649 [2024-12-05 14:03:32.996032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.649 qpair failed and we were unable to recover it.
00:31:50.649 [2024-12-05 14:03:33.005961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.649 [2024-12-05 14:03:33.006017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.649 [2024-12-05 14:03:33.006031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.649 [2024-12-05 14:03:33.006037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.649 [2024-12-05 14:03:33.006043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.649 [2024-12-05 14:03:33.006057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.649 qpair failed and we were unable to recover it.
00:31:50.649 [2024-12-05 14:03:33.015977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.649 [2024-12-05 14:03:33.016041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.649 [2024-12-05 14:03:33.016056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.649 [2024-12-05 14:03:33.016062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.649 [2024-12-05 14:03:33.016068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.649 [2024-12-05 14:03:33.016082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.649 qpair failed and we were unable to recover it.
00:31:50.649 [2024-12-05 14:03:33.025957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.649 [2024-12-05 14:03:33.026008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.649 [2024-12-05 14:03:33.026021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.649 [2024-12-05 14:03:33.026028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.649 [2024-12-05 14:03:33.026036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.649 [2024-12-05 14:03:33.026050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.649 qpair failed and we were unable to recover it.
00:31:50.649 [2024-12-05 14:03:33.035991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.649 [2024-12-05 14:03:33.036051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.649 [2024-12-05 14:03:33.036065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.649 [2024-12-05 14:03:33.036071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.649 [2024-12-05 14:03:33.036077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.649 [2024-12-05 14:03:33.036091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.649 qpair failed and we were unable to recover it.
00:31:50.649 [2024-12-05 14:03:33.046029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.649 [2024-12-05 14:03:33.046087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.649 [2024-12-05 14:03:33.046101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.649 [2024-12-05 14:03:33.046108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.649 [2024-12-05 14:03:33.046114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.649 [2024-12-05 14:03:33.046128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.649 qpair failed and we were unable to recover it.
00:31:50.649 [2024-12-05 14:03:33.056083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.649 [2024-12-05 14:03:33.056136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.649 [2024-12-05 14:03:33.056150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.649 [2024-12-05 14:03:33.056156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.649 [2024-12-05 14:03:33.056162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.649 [2024-12-05 14:03:33.056177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.649 qpair failed and we were unable to recover it.
00:31:50.649 [2024-12-05 14:03:33.066106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.649 [2024-12-05 14:03:33.066168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.649 [2024-12-05 14:03:33.066182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.649 [2024-12-05 14:03:33.066189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.649 [2024-12-05 14:03:33.066195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.650 [2024-12-05 14:03:33.066209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.650 qpair failed and we were unable to recover it.
00:31:50.650 [2024-12-05 14:03:33.076138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.650 [2024-12-05 14:03:33.076192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.650 [2024-12-05 14:03:33.076206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.650 [2024-12-05 14:03:33.076213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.650 [2024-12-05 14:03:33.076219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.650 [2024-12-05 14:03:33.076234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.650 qpair failed and we were unable to recover it.
00:31:50.650 [2024-12-05 14:03:33.086177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.650 [2024-12-05 14:03:33.086233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.650 [2024-12-05 14:03:33.086247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.650 [2024-12-05 14:03:33.086253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.650 [2024-12-05 14:03:33.086259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.650 [2024-12-05 14:03:33.086273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.650 qpair failed and we were unable to recover it.
00:31:50.650 [2024-12-05 14:03:33.096191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.650 [2024-12-05 14:03:33.096246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.650 [2024-12-05 14:03:33.096259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.650 [2024-12-05 14:03:33.096266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.650 [2024-12-05 14:03:33.096272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.650 [2024-12-05 14:03:33.096286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.650 qpair failed and we were unable to recover it.
00:31:50.650 [2024-12-05 14:03:33.106252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.650 [2024-12-05 14:03:33.106304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.650 [2024-12-05 14:03:33.106319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.650 [2024-12-05 14:03:33.106326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.650 [2024-12-05 14:03:33.106331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.650 [2024-12-05 14:03:33.106346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.650 qpair failed and we were unable to recover it.
00:31:50.650 [2024-12-05 14:03:33.116218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.650 [2024-12-05 14:03:33.116269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.650 [2024-12-05 14:03:33.116286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.650 [2024-12-05 14:03:33.116293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.650 [2024-12-05 14:03:33.116299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.650 [2024-12-05 14:03:33.116313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.650 qpair failed and we were unable to recover it.
00:31:50.650 [2024-12-05 14:03:33.126287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.650 [2024-12-05 14:03:33.126353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.650 [2024-12-05 14:03:33.126371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.650 [2024-12-05 14:03:33.126377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.650 [2024-12-05 14:03:33.126383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.650 [2024-12-05 14:03:33.126397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.650 qpair failed and we were unable to recover it.
00:31:50.650 [2024-12-05 14:03:33.136288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.650 [2024-12-05 14:03:33.136342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.650 [2024-12-05 14:03:33.136357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.650 [2024-12-05 14:03:33.136364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.650 [2024-12-05 14:03:33.136375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.650 [2024-12-05 14:03:33.136390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.650 qpair failed and we were unable to recover it.
00:31:50.650 [2024-12-05 14:03:33.146310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.650 [2024-12-05 14:03:33.146364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.650 [2024-12-05 14:03:33.146382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.650 [2024-12-05 14:03:33.146389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.650 [2024-12-05 14:03:33.146395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.650 [2024-12-05 14:03:33.146409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.650 qpair failed and we were unable to recover it.
00:31:50.650 [2024-12-05 14:03:33.156349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.650 [2024-12-05 14:03:33.156431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.650 [2024-12-05 14:03:33.156444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.650 [2024-12-05 14:03:33.156451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.650 [2024-12-05 14:03:33.156460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.650 [2024-12-05 14:03:33.156474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.650 qpair failed and we were unable to recover it.
00:31:50.650 [2024-12-05 14:03:33.166395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.650 [2024-12-05 14:03:33.166447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.650 [2024-12-05 14:03:33.166460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.650 [2024-12-05 14:03:33.166467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.650 [2024-12-05 14:03:33.166473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.650 [2024-12-05 14:03:33.166487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.650 qpair failed and we were unable to recover it.
00:31:50.651 [2024-12-05 14:03:33.176420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.651 [2024-12-05 14:03:33.176478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.651 [2024-12-05 14:03:33.176498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.651 [2024-12-05 14:03:33.176504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.651 [2024-12-05 14:03:33.176510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.651 [2024-12-05 14:03:33.176525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.651 qpair failed and we were unable to recover it.
00:31:50.651 [2024-12-05 14:03:33.186365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.651 [2024-12-05 14:03:33.186427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.651 [2024-12-05 14:03:33.186441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.651 [2024-12-05 14:03:33.186447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.651 [2024-12-05 14:03:33.186453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.651 [2024-12-05 14:03:33.186468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.651 qpair failed and we were unable to recover it.
00:31:50.651 [2024-12-05 14:03:33.196464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.651 [2024-12-05 14:03:33.196517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.651 [2024-12-05 14:03:33.196531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.651 [2024-12-05 14:03:33.196538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.651 [2024-12-05 14:03:33.196544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.651 [2024-12-05 14:03:33.196558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.651 qpair failed and we were unable to recover it.
00:31:50.651 [2024-12-05 14:03:33.206507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.651 [2024-12-05 14:03:33.206562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.651 [2024-12-05 14:03:33.206576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.651 [2024-12-05 14:03:33.206583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.651 [2024-12-05 14:03:33.206589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.651 [2024-12-05 14:03:33.206603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.651 qpair failed and we were unable to recover it. 
00:31:50.651 [2024-12-05 14:03:33.216535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.651 [2024-12-05 14:03:33.216590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.651 [2024-12-05 14:03:33.216604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.651 [2024-12-05 14:03:33.216611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.651 [2024-12-05 14:03:33.216616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.651 [2024-12-05 14:03:33.216631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.651 qpair failed and we were unable to recover it. 
00:31:50.651 [2024-12-05 14:03:33.226598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.651 [2024-12-05 14:03:33.226658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.651 [2024-12-05 14:03:33.226673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.651 [2024-12-05 14:03:33.226680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.651 [2024-12-05 14:03:33.226686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.651 [2024-12-05 14:03:33.226700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.651 qpair failed and we were unable to recover it. 
00:31:50.911 [2024-12-05 14:03:33.236588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.911 [2024-12-05 14:03:33.236641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.911 [2024-12-05 14:03:33.236654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.911 [2024-12-05 14:03:33.236661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.911 [2024-12-05 14:03:33.236666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.911 [2024-12-05 14:03:33.236680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.911 qpair failed and we were unable to recover it. 
00:31:50.911 [2024-12-05 14:03:33.246615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.911 [2024-12-05 14:03:33.246670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.911 [2024-12-05 14:03:33.246686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.911 [2024-12-05 14:03:33.246693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.911 [2024-12-05 14:03:33.246699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.912 [2024-12-05 14:03:33.246713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.912 qpair failed and we were unable to recover it. 
00:31:50.912 [2024-12-05 14:03:33.256667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.912 [2024-12-05 14:03:33.256741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.912 [2024-12-05 14:03:33.256755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.912 [2024-12-05 14:03:33.256761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.912 [2024-12-05 14:03:33.256768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.912 [2024-12-05 14:03:33.256781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.912 qpair failed and we were unable to recover it. 
00:31:50.912 [2024-12-05 14:03:33.266667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.912 [2024-12-05 14:03:33.266719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.912 [2024-12-05 14:03:33.266732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.912 [2024-12-05 14:03:33.266739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.912 [2024-12-05 14:03:33.266745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.912 [2024-12-05 14:03:33.266758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.912 qpair failed and we were unable to recover it. 
00:31:50.912 [2024-12-05 14:03:33.276684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.912 [2024-12-05 14:03:33.276735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.912 [2024-12-05 14:03:33.276749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.912 [2024-12-05 14:03:33.276755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.912 [2024-12-05 14:03:33.276761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.912 [2024-12-05 14:03:33.276775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.912 qpair failed and we were unable to recover it. 
00:31:50.912 [2024-12-05 14:03:33.286731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.912 [2024-12-05 14:03:33.286788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.912 [2024-12-05 14:03:33.286802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.912 [2024-12-05 14:03:33.286809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.912 [2024-12-05 14:03:33.286818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.912 [2024-12-05 14:03:33.286832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.912 qpair failed and we were unable to recover it. 
00:31:50.912 [2024-12-05 14:03:33.296766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.912 [2024-12-05 14:03:33.296859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.912 [2024-12-05 14:03:33.296872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.912 [2024-12-05 14:03:33.296878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.912 [2024-12-05 14:03:33.296884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.912 [2024-12-05 14:03:33.296898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.912 qpair failed and we were unable to recover it. 
00:31:50.912 [2024-12-05 14:03:33.306804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.912 [2024-12-05 14:03:33.306856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.912 [2024-12-05 14:03:33.306870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.912 [2024-12-05 14:03:33.306876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.912 [2024-12-05 14:03:33.306882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.912 [2024-12-05 14:03:33.306896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.912 qpair failed and we were unable to recover it. 
00:31:50.912 [2024-12-05 14:03:33.316858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.912 [2024-12-05 14:03:33.316921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.912 [2024-12-05 14:03:33.316934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.912 [2024-12-05 14:03:33.316941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.912 [2024-12-05 14:03:33.316946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.912 [2024-12-05 14:03:33.316960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.912 qpair failed and we were unable to recover it. 
00:31:50.912 [2024-12-05 14:03:33.326875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.912 [2024-12-05 14:03:33.326957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.912 [2024-12-05 14:03:33.326970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.912 [2024-12-05 14:03:33.326978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.912 [2024-12-05 14:03:33.326986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.912 [2024-12-05 14:03:33.327001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.912 qpair failed and we were unable to recover it. 
00:31:50.912 [2024-12-05 14:03:33.336885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.912 [2024-12-05 14:03:33.336941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.912 [2024-12-05 14:03:33.336955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.912 [2024-12-05 14:03:33.336962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.912 [2024-12-05 14:03:33.336968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.912 [2024-12-05 14:03:33.336982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.912 qpair failed and we were unable to recover it. 
00:31:50.912 [2024-12-05 14:03:33.346915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.912 [2024-12-05 14:03:33.346977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.912 [2024-12-05 14:03:33.346991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.912 [2024-12-05 14:03:33.346998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.912 [2024-12-05 14:03:33.347004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.912 [2024-12-05 14:03:33.347019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.912 qpair failed and we were unable to recover it. 
00:31:50.912 [2024-12-05 14:03:33.356935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.912 [2024-12-05 14:03:33.357020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.912 [2024-12-05 14:03:33.357033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.912 [2024-12-05 14:03:33.357040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.912 [2024-12-05 14:03:33.357046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.912 [2024-12-05 14:03:33.357060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.912 qpair failed and we were unable to recover it. 
00:31:50.912 [2024-12-05 14:03:33.366987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.912 [2024-12-05 14:03:33.367040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.912 [2024-12-05 14:03:33.367054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.912 [2024-12-05 14:03:33.367061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.912 [2024-12-05 14:03:33.367066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.912 [2024-12-05 14:03:33.367080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.912 qpair failed and we were unable to recover it. 
00:31:50.913 [2024-12-05 14:03:33.377011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.913 [2024-12-05 14:03:33.377067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.913 [2024-12-05 14:03:33.377083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.913 [2024-12-05 14:03:33.377090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.913 [2024-12-05 14:03:33.377096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.913 [2024-12-05 14:03:33.377110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.913 qpair failed and we were unable to recover it. 
00:31:50.913 [2024-12-05 14:03:33.387026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.913 [2024-12-05 14:03:33.387082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.913 [2024-12-05 14:03:33.387096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.913 [2024-12-05 14:03:33.387102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.913 [2024-12-05 14:03:33.387108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.913 [2024-12-05 14:03:33.387121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.913 qpair failed and we were unable to recover it. 
00:31:50.913 [2024-12-05 14:03:33.397095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.913 [2024-12-05 14:03:33.397168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.913 [2024-12-05 14:03:33.397182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.913 [2024-12-05 14:03:33.397189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.913 [2024-12-05 14:03:33.397194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.913 [2024-12-05 14:03:33.397208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.913 qpair failed and we were unable to recover it. 
00:31:50.913 [2024-12-05 14:03:33.407115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.913 [2024-12-05 14:03:33.407170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.913 [2024-12-05 14:03:33.407185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.913 [2024-12-05 14:03:33.407191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.913 [2024-12-05 14:03:33.407197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.913 [2024-12-05 14:03:33.407212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.913 qpair failed and we were unable to recover it. 
00:31:50.913 [2024-12-05 14:03:33.417136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.913 [2024-12-05 14:03:33.417188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.913 [2024-12-05 14:03:33.417202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.913 [2024-12-05 14:03:33.417209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.913 [2024-12-05 14:03:33.417215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.913 [2024-12-05 14:03:33.417234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.913 qpair failed and we were unable to recover it. 
00:31:50.913 [2024-12-05 14:03:33.427142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.913 [2024-12-05 14:03:33.427239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.913 [2024-12-05 14:03:33.427253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.913 [2024-12-05 14:03:33.427260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.913 [2024-12-05 14:03:33.427265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.913 [2024-12-05 14:03:33.427280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.913 qpair failed and we were unable to recover it. 
00:31:50.913 [2024-12-05 14:03:33.437201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.913 [2024-12-05 14:03:33.437261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.913 [2024-12-05 14:03:33.437274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.913 [2024-12-05 14:03:33.437281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.913 [2024-12-05 14:03:33.437287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.913 [2024-12-05 14:03:33.437301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.913 qpair failed and we were unable to recover it. 
00:31:50.913 [2024-12-05 14:03:33.447241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.913 [2024-12-05 14:03:33.447295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.913 [2024-12-05 14:03:33.447311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.913 [2024-12-05 14:03:33.447318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.913 [2024-12-05 14:03:33.447325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.913 [2024-12-05 14:03:33.447340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.913 qpair failed and we were unable to recover it. 
00:31:50.913 [2024-12-05 14:03:33.457261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.913 [2024-12-05 14:03:33.457347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.913 [2024-12-05 14:03:33.457361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.913 [2024-12-05 14:03:33.457372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.913 [2024-12-05 14:03:33.457378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.913 [2024-12-05 14:03:33.457393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.913 qpair failed and we were unable to recover it. 
00:31:50.913 [2024-12-05 14:03:33.467282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:50.913 [2024-12-05 14:03:33.467359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:50.913 [2024-12-05 14:03:33.467378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:50.913 [2024-12-05 14:03:33.467384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:50.913 [2024-12-05 14:03:33.467390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:50.913 [2024-12-05 14:03:33.467404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:50.913 qpair failed and we were unable to recover it. 
00:31:50.913 [2024-12-05 14:03:33.477286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.913 [2024-12-05 14:03:33.477362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.913 [2024-12-05 14:03:33.477382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.913 [2024-12-05 14:03:33.477388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.913 [2024-12-05 14:03:33.477394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.913 [2024-12-05 14:03:33.477409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.913 qpair failed and we were unable to recover it.
00:31:50.913 [2024-12-05 14:03:33.487404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:50.913 [2024-12-05 14:03:33.487506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:50.913 [2024-12-05 14:03:33.487520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:50.913 [2024-12-05 14:03:33.487528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:50.913 [2024-12-05 14:03:33.487533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:50.913 [2024-12-05 14:03:33.487549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:50.913 qpair failed and we were unable to recover it.
00:31:51.175 [2024-12-05 14:03:33.497291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.175 [2024-12-05 14:03:33.497344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.175 [2024-12-05 14:03:33.497358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.175 [2024-12-05 14:03:33.497365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.175 [2024-12-05 14:03:33.497376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.175 [2024-12-05 14:03:33.497390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.175 qpair failed and we were unable to recover it.
00:31:51.175 [2024-12-05 14:03:33.507379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.176 [2024-12-05 14:03:33.507455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.176 [2024-12-05 14:03:33.507472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.176 [2024-12-05 14:03:33.507479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.176 [2024-12-05 14:03:33.507485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.176 [2024-12-05 14:03:33.507499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.176 qpair failed and we were unable to recover it.
00:31:51.176 [2024-12-05 14:03:33.517421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.176 [2024-12-05 14:03:33.517508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.176 [2024-12-05 14:03:33.517522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.176 [2024-12-05 14:03:33.517528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.176 [2024-12-05 14:03:33.517534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.176 [2024-12-05 14:03:33.517548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.176 qpair failed and we were unable to recover it.
00:31:51.176 [2024-12-05 14:03:33.527440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.176 [2024-12-05 14:03:33.527496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.176 [2024-12-05 14:03:33.527509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.176 [2024-12-05 14:03:33.527515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.176 [2024-12-05 14:03:33.527522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.176 [2024-12-05 14:03:33.527536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.176 qpair failed and we were unable to recover it.
00:31:51.176 [2024-12-05 14:03:33.537390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.176 [2024-12-05 14:03:33.537448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.176 [2024-12-05 14:03:33.537462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.176 [2024-12-05 14:03:33.537469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.176 [2024-12-05 14:03:33.537475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.176 [2024-12-05 14:03:33.537489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.176 qpair failed and we were unable to recover it.
00:31:51.176 [2024-12-05 14:03:33.547476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.176 [2024-12-05 14:03:33.547532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.176 [2024-12-05 14:03:33.547546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.176 [2024-12-05 14:03:33.547553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.176 [2024-12-05 14:03:33.547559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.176 [2024-12-05 14:03:33.547577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.176 qpair failed and we were unable to recover it.
00:31:51.176 [2024-12-05 14:03:33.557554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.176 [2024-12-05 14:03:33.557610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.176 [2024-12-05 14:03:33.557624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.176 [2024-12-05 14:03:33.557631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.176 [2024-12-05 14:03:33.557636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.176 [2024-12-05 14:03:33.557651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.176 qpair failed and we were unable to recover it.
00:31:51.176 [2024-12-05 14:03:33.567580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.176 [2024-12-05 14:03:33.567660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.176 [2024-12-05 14:03:33.567674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.176 [2024-12-05 14:03:33.567680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.176 [2024-12-05 14:03:33.567686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.176 [2024-12-05 14:03:33.567701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.176 qpair failed and we were unable to recover it.
00:31:51.176 [2024-12-05 14:03:33.577562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.176 [2024-12-05 14:03:33.577617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.176 [2024-12-05 14:03:33.577630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.176 [2024-12-05 14:03:33.577637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.176 [2024-12-05 14:03:33.577642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.176 [2024-12-05 14:03:33.577656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.176 qpair failed and we were unable to recover it.
00:31:51.176 [2024-12-05 14:03:33.587579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.176 [2024-12-05 14:03:33.587633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.176 [2024-12-05 14:03:33.587646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.176 [2024-12-05 14:03:33.587653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.176 [2024-12-05 14:03:33.587658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.176 [2024-12-05 14:03:33.587673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.176 qpair failed and we were unable to recover it.
00:31:51.176 [2024-12-05 14:03:33.597617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.176 [2024-12-05 14:03:33.597688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.176 [2024-12-05 14:03:33.597702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.176 [2024-12-05 14:03:33.597708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.176 [2024-12-05 14:03:33.597714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.176 [2024-12-05 14:03:33.597728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.176 qpair failed and we were unable to recover it.
00:31:51.176 [2024-12-05 14:03:33.607597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.176 [2024-12-05 14:03:33.607653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.176 [2024-12-05 14:03:33.607667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.176 [2024-12-05 14:03:33.607673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.176 [2024-12-05 14:03:33.607680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.176 [2024-12-05 14:03:33.607694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.176 qpair failed and we were unable to recover it.
00:31:51.176 [2024-12-05 14:03:33.617668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.176 [2024-12-05 14:03:33.617725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.176 [2024-12-05 14:03:33.617739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.176 [2024-12-05 14:03:33.617746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.176 [2024-12-05 14:03:33.617752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.176 [2024-12-05 14:03:33.617766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.176 qpair failed and we were unable to recover it.
00:31:51.176 [2024-12-05 14:03:33.627654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.176 [2024-12-05 14:03:33.627739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.176 [2024-12-05 14:03:33.627753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.176 [2024-12-05 14:03:33.627759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.176 [2024-12-05 14:03:33.627765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.176 [2024-12-05 14:03:33.627779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.176 qpair failed and we were unable to recover it.
00:31:51.176 [2024-12-05 14:03:33.637657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.177 [2024-12-05 14:03:33.637715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.177 [2024-12-05 14:03:33.637731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.177 [2024-12-05 14:03:33.637738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.177 [2024-12-05 14:03:33.637743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.177 [2024-12-05 14:03:33.637757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.177 qpair failed and we were unable to recover it.
00:31:51.177 [2024-12-05 14:03:33.647730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.177 [2024-12-05 14:03:33.647814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.177 [2024-12-05 14:03:33.647828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.177 [2024-12-05 14:03:33.647834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.177 [2024-12-05 14:03:33.647839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.177 [2024-12-05 14:03:33.647853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.177 qpair failed and we were unable to recover it.
00:31:51.177 [2024-12-05 14:03:33.657717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.177 [2024-12-05 14:03:33.657771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.177 [2024-12-05 14:03:33.657784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.177 [2024-12-05 14:03:33.657791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.177 [2024-12-05 14:03:33.657796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.177 [2024-12-05 14:03:33.657810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.177 qpair failed and we were unable to recover it.
00:31:51.177 [2024-12-05 14:03:33.667793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.177 [2024-12-05 14:03:33.667850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.177 [2024-12-05 14:03:33.667866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.177 [2024-12-05 14:03:33.667873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.177 [2024-12-05 14:03:33.667881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.177 [2024-12-05 14:03:33.667896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.177 qpair failed and we were unable to recover it.
00:31:51.177 [2024-12-05 14:03:33.677811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.177 [2024-12-05 14:03:33.677897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.177 [2024-12-05 14:03:33.677910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.177 [2024-12-05 14:03:33.677917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.177 [2024-12-05 14:03:33.677923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.177 [2024-12-05 14:03:33.677940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.177 qpair failed and we were unable to recover it.
00:31:51.177 [2024-12-05 14:03:33.687861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.177 [2024-12-05 14:03:33.687912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.177 [2024-12-05 14:03:33.687926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.177 [2024-12-05 14:03:33.687933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.177 [2024-12-05 14:03:33.687939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.177 [2024-12-05 14:03:33.687953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.177 qpair failed and we were unable to recover it.
00:31:51.177 [2024-12-05 14:03:33.697957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.177 [2024-12-05 14:03:33.698009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.177 [2024-12-05 14:03:33.698022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.177 [2024-12-05 14:03:33.698028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.177 [2024-12-05 14:03:33.698034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.177 [2024-12-05 14:03:33.698049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.177 qpair failed and we were unable to recover it.
00:31:51.177 [2024-12-05 14:03:33.707911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.177 [2024-12-05 14:03:33.708005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.177 [2024-12-05 14:03:33.708019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.177 [2024-12-05 14:03:33.708025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.177 [2024-12-05 14:03:33.708031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.177 [2024-12-05 14:03:33.708045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.177 qpair failed and we were unable to recover it.
00:31:51.177 [2024-12-05 14:03:33.717922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.177 [2024-12-05 14:03:33.717969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.177 [2024-12-05 14:03:33.717983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.177 [2024-12-05 14:03:33.717990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.177 [2024-12-05 14:03:33.717995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.177 [2024-12-05 14:03:33.718009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.177 qpair failed and we were unable to recover it.
00:31:51.177 [2024-12-05 14:03:33.728020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.177 [2024-12-05 14:03:33.728075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.177 [2024-12-05 14:03:33.728090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.177 [2024-12-05 14:03:33.728096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.177 [2024-12-05 14:03:33.728102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.177 [2024-12-05 14:03:33.728116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.177 qpair failed and we were unable to recover it.
00:31:51.177 [2024-12-05 14:03:33.738023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.177 [2024-12-05 14:03:33.738075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.177 [2024-12-05 14:03:33.738088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.177 [2024-12-05 14:03:33.738094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.177 [2024-12-05 14:03:33.738100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.177 [2024-12-05 14:03:33.738114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.177 qpair failed and we were unable to recover it.
00:31:51.177 [2024-12-05 14:03:33.748074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.177 [2024-12-05 14:03:33.748129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.177 [2024-12-05 14:03:33.748142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.177 [2024-12-05 14:03:33.748148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.177 [2024-12-05 14:03:33.748154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.177 [2024-12-05 14:03:33.748168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.177 qpair failed and we were unable to recover it.
00:31:51.177 [2024-12-05 14:03:33.757992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.177 [2024-12-05 14:03:33.758046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.177 [2024-12-05 14:03:33.758061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.177 [2024-12-05 14:03:33.758068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.177 [2024-12-05 14:03:33.758073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.177 [2024-12-05 14:03:33.758088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.177 qpair failed and we were unable to recover it.
00:31:51.439 [2024-12-05 14:03:33.768086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.439 [2024-12-05 14:03:33.768144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.439 [2024-12-05 14:03:33.768161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.439 [2024-12-05 14:03:33.768167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.439 [2024-12-05 14:03:33.768173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.439 [2024-12-05 14:03:33.768188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.439 qpair failed and we were unable to recover it.
00:31:51.439 [2024-12-05 14:03:33.778132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.439 [2024-12-05 14:03:33.778184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.439 [2024-12-05 14:03:33.778198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.439 [2024-12-05 14:03:33.778204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.439 [2024-12-05 14:03:33.778210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.439 [2024-12-05 14:03:33.778225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.439 qpair failed and we were unable to recover it.
00:31:51.439 [2024-12-05 14:03:33.788150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.439 [2024-12-05 14:03:33.788205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.439 [2024-12-05 14:03:33.788219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.439 [2024-12-05 14:03:33.788226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.439 [2024-12-05 14:03:33.788232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.439 [2024-12-05 14:03:33.788245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.439 qpair failed and we were unable to recover it.
00:31:51.439 [2024-12-05 14:03:33.798117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.439 [2024-12-05 14:03:33.798173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.439 [2024-12-05 14:03:33.798187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.439 [2024-12-05 14:03:33.798193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.439 [2024-12-05 14:03:33.798199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.439 [2024-12-05 14:03:33.798213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.439 qpair failed and we were unable to recover it.
00:31:51.439 [2024-12-05 14:03:33.808204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.439 [2024-12-05 14:03:33.808256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.439 [2024-12-05 14:03:33.808271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.439 [2024-12-05 14:03:33.808277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.439 [2024-12-05 14:03:33.808283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.439 [2024-12-05 14:03:33.808301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.439 qpair failed and we were unable to recover it.
00:31:51.439 [2024-12-05 14:03:33.818242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.439 [2024-12-05 14:03:33.818300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.439 [2024-12-05 14:03:33.818313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.439 [2024-12-05 14:03:33.818320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.439 [2024-12-05 14:03:33.818326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.439 [2024-12-05 14:03:33.818340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.439 qpair failed and we were unable to recover it.
00:31:51.439 [2024-12-05 14:03:33.828223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:51.440 [2024-12-05 14:03:33.828278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:51.440 [2024-12-05 14:03:33.828293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:51.440 [2024-12-05 14:03:33.828300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:51.440 [2024-12-05 14:03:33.828307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:51.440 [2024-12-05 14:03:33.828322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:51.440 qpair failed and we were unable to recover it. 
00:31:51.440 [2024-12-05 14:03:33.838296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:51.440 [2024-12-05 14:03:33.838365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:51.440 [2024-12-05 14:03:33.838383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:51.440 [2024-12-05 14:03:33.838390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:51.440 [2024-12-05 14:03:33.838396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:51.440 [2024-12-05 14:03:33.838410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:51.440 qpair failed and we were unable to recover it. 
00:31:51.440 [2024-12-05 14:03:33.848360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:51.440 [2024-12-05 14:03:33.848421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:51.440 [2024-12-05 14:03:33.848435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:51.440 [2024-12-05 14:03:33.848441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:51.440 [2024-12-05 14:03:33.848447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:51.440 [2024-12-05 14:03:33.848461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:51.440 qpair failed and we were unable to recover it. 
00:31:51.440 [2024-12-05 14:03:33.858357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:51.440 [2024-12-05 14:03:33.858424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:51.440 [2024-12-05 14:03:33.858439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:51.440 [2024-12-05 14:03:33.858445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:51.440 [2024-12-05 14:03:33.858451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:51.440 [2024-12-05 14:03:33.858466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:51.440 qpair failed and we were unable to recover it. 
00:31:51.440 [2024-12-05 14:03:33.868316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:51.440 [2024-12-05 14:03:33.868379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:51.440 [2024-12-05 14:03:33.868393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:51.440 [2024-12-05 14:03:33.868399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:51.440 [2024-12-05 14:03:33.868405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:51.440 [2024-12-05 14:03:33.868420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:51.440 qpair failed and we were unable to recover it. 
00:31:51.440 [2024-12-05 14:03:33.878415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:51.440 [2024-12-05 14:03:33.878464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:51.440 [2024-12-05 14:03:33.878477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:51.440 [2024-12-05 14:03:33.878484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:51.440 [2024-12-05 14:03:33.878490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:51.440 [2024-12-05 14:03:33.878504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:51.440 qpair failed and we were unable to recover it. 
00:31:51.440 [2024-12-05 14:03:33.888471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:51.440 [2024-12-05 14:03:33.888569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:51.440 [2024-12-05 14:03:33.888582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:51.440 [2024-12-05 14:03:33.888589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:51.440 [2024-12-05 14:03:33.888595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:51.440 [2024-12-05 14:03:33.888609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:51.440 qpair failed and we were unable to recover it. 
00:31:51.440 [2024-12-05 14:03:33.898477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:51.440 [2024-12-05 14:03:33.898527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:51.440 [2024-12-05 14:03:33.898543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:51.440 [2024-12-05 14:03:33.898550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:51.440 [2024-12-05 14:03:33.898556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:51.440 [2024-12-05 14:03:33.898570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:51.440 qpair failed and we were unable to recover it. 
00:31:51.440 [2024-12-05 14:03:33.908499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:51.440 [2024-12-05 14:03:33.908554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:51.440 [2024-12-05 14:03:33.908567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:51.440 [2024-12-05 14:03:33.908573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:51.440 [2024-12-05 14:03:33.908579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:51.440 [2024-12-05 14:03:33.908594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:51.440 qpair failed and we were unable to recover it. 
00:31:51.440 [2024-12-05 14:03:33.918518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:51.440 [2024-12-05 14:03:33.918569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:51.440 [2024-12-05 14:03:33.918584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:51.440 [2024-12-05 14:03:33.918590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:51.440 [2024-12-05 14:03:33.918596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:51.440 [2024-12-05 14:03:33.918610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:51.440 qpair failed and we were unable to recover it. 
00:31:51.440 [2024-12-05 14:03:33.928605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:51.440 [2024-12-05 14:03:33.928661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:51.440 [2024-12-05 14:03:33.928675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:51.440 [2024-12-05 14:03:33.928681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:51.440 [2024-12-05 14:03:33.928687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:51.440 [2024-12-05 14:03:33.928701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:51.440 qpair failed and we were unable to recover it. 
00:31:51.440 [2024-12-05 14:03:33.938644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:51.440 [2024-12-05 14:03:33.938705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:51.440 [2024-12-05 14:03:33.938719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:51.440 [2024-12-05 14:03:33.938725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:51.440 [2024-12-05 14:03:33.938731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:51.440 [2024-12-05 14:03:33.938748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:51.440 qpair failed and we were unable to recover it. 
00:31:51.440 [2024-12-05 14:03:33.948555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:51.440 [2024-12-05 14:03:33.948644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:51.440 [2024-12-05 14:03:33.948658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:51.440 [2024-12-05 14:03:33.948664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:51.440 [2024-12-05 14:03:33.948669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:51.440 [2024-12-05 14:03:33.948684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:51.440 qpair failed and we were unable to recover it. 
00:31:51.440 [2024-12-05 14:03:33.958628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:51.440 [2024-12-05 14:03:33.958682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:51.441 [2024-12-05 14:03:33.958697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:51.441 [2024-12-05 14:03:33.958703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:51.441 [2024-12-05 14:03:33.958709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:51.441 [2024-12-05 14:03:33.958723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:51.441 qpair failed and we were unable to recover it. 
00:31:51.441 [2024-12-05 14:03:33.968677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:51.441 [2024-12-05 14:03:33.968734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:51.441 [2024-12-05 14:03:33.968748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:51.441 [2024-12-05 14:03:33.968755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:51.441 [2024-12-05 14:03:33.968761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:51.441 [2024-12-05 14:03:33.968775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:51.441 qpair failed and we were unable to recover it. 
00:31:51.441 [2024-12-05 14:03:33.978668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:51.441 [2024-12-05 14:03:33.978726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:51.441 [2024-12-05 14:03:33.978739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:51.441 [2024-12-05 14:03:33.978746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:51.441 [2024-12-05 14:03:33.978751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:51.441 [2024-12-05 14:03:33.978766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:51.441 qpair failed and we were unable to recover it. 
00:31:51.441 [2024-12-05 14:03:33.988740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:51.441 [2024-12-05 14:03:33.988801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:51.441 [2024-12-05 14:03:33.988816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:51.441 [2024-12-05 14:03:33.988822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:51.441 [2024-12-05 14:03:33.988828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:51.441 [2024-12-05 14:03:33.988842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:51.441 qpair failed and we were unable to recover it. 
00:31:51.441 [2024-12-05 14:03:33.998786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:51.441 [2024-12-05 14:03:33.998841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:51.441 [2024-12-05 14:03:33.998855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:51.441 [2024-12-05 14:03:33.998861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:51.441 [2024-12-05 14:03:33.998867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:51.441 [2024-12-05 14:03:33.998881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:51.441 qpair failed and we were unable to recover it. 
00:31:51.441 [2024-12-05 14:03:34.008770] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:51.441 [2024-12-05 14:03:34.008827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:51.441 [2024-12-05 14:03:34.008842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:51.441 [2024-12-05 14:03:34.008849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:51.441 [2024-12-05 14:03:34.008855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:51.441 [2024-12-05 14:03:34.008869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:51.441 qpair failed and we were unable to recover it. 
00:31:51.441 [2024-12-05 14:03:34.018850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:51.441 [2024-12-05 14:03:34.018951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:51.441 [2024-12-05 14:03:34.018965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:51.441 [2024-12-05 14:03:34.018971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:51.441 [2024-12-05 14:03:34.018978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:51.441 [2024-12-05 14:03:34.018992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:51.441 qpair failed and we were unable to recover it. 
00:31:51.702 [2024-12-05 14:03:34.028821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:51.702 [2024-12-05 14:03:34.028874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:51.702 [2024-12-05 14:03:34.028890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:51.702 [2024-12-05 14:03:34.028897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:51.702 [2024-12-05 14:03:34.028903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:51.702 [2024-12-05 14:03:34.028916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:51.702 qpair failed and we were unable to recover it. 
00:31:51.702 [2024-12-05 14:03:34.038859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:51.702 [2024-12-05 14:03:34.038914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:51.702 [2024-12-05 14:03:34.038927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:51.702 [2024-12-05 14:03:34.038933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:51.702 [2024-12-05 14:03:34.038939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:51.702 [2024-12-05 14:03:34.038953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:51.702 qpair failed and we were unable to recover it. 
00:31:51.702 [2024-12-05 14:03:34.048894] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:51.702 [2024-12-05 14:03:34.048948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:51.702 [2024-12-05 14:03:34.048961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:51.702 [2024-12-05 14:03:34.048968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:51.702 [2024-12-05 14:03:34.048974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:51.702 [2024-12-05 14:03:34.048987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:51.702 qpair failed and we were unable to recover it. 
00:31:51.702 [2024-12-05 14:03:34.058959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:51.702 [2024-12-05 14:03:34.059025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:51.702 [2024-12-05 14:03:34.059038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:51.702 [2024-12-05 14:03:34.059045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:51.702 [2024-12-05 14:03:34.059051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:51.702 [2024-12-05 14:03:34.059065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:51.702 qpair failed and we were unable to recover it. 
00:31:51.702 [2024-12-05 14:03:34.068937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:51.702 [2024-12-05 14:03:34.068988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:51.702 [2024-12-05 14:03:34.069002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:51.702 [2024-12-05 14:03:34.069009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:51.702 [2024-12-05 14:03:34.069015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:51.702 [2024-12-05 14:03:34.069034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:51.702 qpair failed and we were unable to recover it. 
00:31:51.702 [2024-12-05 14:03:34.078986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:51.702 [2024-12-05 14:03:34.079036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:51.702 [2024-12-05 14:03:34.079050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:51.702 [2024-12-05 14:03:34.079057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:51.702 [2024-12-05 14:03:34.079063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:51.702 [2024-12-05 14:03:34.079077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:51.702 qpair failed and we were unable to recover it. 
00:31:51.702 [2024-12-05 14:03:34.089035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:51.702 [2024-12-05 14:03:34.089121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:51.702 [2024-12-05 14:03:34.089136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:51.702 [2024-12-05 14:03:34.089143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:51.702 [2024-12-05 14:03:34.089150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0 00:31:51.702 [2024-12-05 14:03:34.089164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:51.702 qpair failed and we were unable to recover it. 
00:31:51.702 [2024-12-05 14:03:34.099043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.702 [2024-12-05 14:03:34.099097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.702 [2024-12-05 14:03:34.099111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.702 [2024-12-05 14:03:34.099117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.702 [2024-12-05 14:03:34.099123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.702 [2024-12-05 14:03:34.099137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.702 qpair failed and we were unable to recover it.
00:31:51.702 [2024-12-05 14:03:34.109043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.702 [2024-12-05 14:03:34.109092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.702 [2024-12-05 14:03:34.109105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.702 [2024-12-05 14:03:34.109111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.702 [2024-12-05 14:03:34.109117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.702 [2024-12-05 14:03:34.109131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.702 qpair failed and we were unable to recover it.
00:31:51.702 [2024-12-05 14:03:34.119095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.702 [2024-12-05 14:03:34.119182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.702 [2024-12-05 14:03:34.119195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.702 [2024-12-05 14:03:34.119202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.702 [2024-12-05 14:03:34.119207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.703 [2024-12-05 14:03:34.119221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.703 qpair failed and we were unable to recover it.
00:31:51.703 [2024-12-05 14:03:34.129130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.703 [2024-12-05 14:03:34.129183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.703 [2024-12-05 14:03:34.129197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.703 [2024-12-05 14:03:34.129203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.703 [2024-12-05 14:03:34.129209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.703 [2024-12-05 14:03:34.129223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.703 qpair failed and we were unable to recover it.
00:31:51.703 [2024-12-05 14:03:34.139213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.703 [2024-12-05 14:03:34.139304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.703 [2024-12-05 14:03:34.139318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.703 [2024-12-05 14:03:34.139324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.703 [2024-12-05 14:03:34.139330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.703 [2024-12-05 14:03:34.139345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.703 qpair failed and we were unable to recover it.
00:31:51.703 [2024-12-05 14:03:34.149167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.703 [2024-12-05 14:03:34.149252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.703 [2024-12-05 14:03:34.149266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.703 [2024-12-05 14:03:34.149273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.703 [2024-12-05 14:03:34.149279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.703 [2024-12-05 14:03:34.149293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.703 qpair failed and we were unable to recover it.
00:31:51.703 [2024-12-05 14:03:34.159198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.703 [2024-12-05 14:03:34.159252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.703 [2024-12-05 14:03:34.159269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.703 [2024-12-05 14:03:34.159275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.703 [2024-12-05 14:03:34.159281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.703 [2024-12-05 14:03:34.159295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.703 qpair failed and we were unable to recover it.
00:31:51.703 [2024-12-05 14:03:34.169269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.703 [2024-12-05 14:03:34.169325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.703 [2024-12-05 14:03:34.169339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.703 [2024-12-05 14:03:34.169346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.703 [2024-12-05 14:03:34.169352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.703 [2024-12-05 14:03:34.169372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.703 qpair failed and we were unable to recover it.
00:31:51.703 [2024-12-05 14:03:34.179243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.703 [2024-12-05 14:03:34.179305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.703 [2024-12-05 14:03:34.179318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.703 [2024-12-05 14:03:34.179325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.703 [2024-12-05 14:03:34.179331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.703 [2024-12-05 14:03:34.179345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.703 qpair failed and we were unable to recover it.
00:31:51.703 [2024-12-05 14:03:34.189335] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.703 [2024-12-05 14:03:34.189399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.703 [2024-12-05 14:03:34.189413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.703 [2024-12-05 14:03:34.189420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.703 [2024-12-05 14:03:34.189426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.703 [2024-12-05 14:03:34.189440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.703 qpair failed and we were unable to recover it.
00:31:51.703 [2024-12-05 14:03:34.199262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.703 [2024-12-05 14:03:34.199348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.703 [2024-12-05 14:03:34.199362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.703 [2024-12-05 14:03:34.199371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.703 [2024-12-05 14:03:34.199378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.703 [2024-12-05 14:03:34.199394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.703 qpair failed and we were unable to recover it.
00:31:51.703 [2024-12-05 14:03:34.209349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.703 [2024-12-05 14:03:34.209407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.703 [2024-12-05 14:03:34.209422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.703 [2024-12-05 14:03:34.209429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.703 [2024-12-05 14:03:34.209435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.703 [2024-12-05 14:03:34.209450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.703 qpair failed and we were unable to recover it.
00:31:51.703 [2024-12-05 14:03:34.219364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.703 [2024-12-05 14:03:34.219423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.703 [2024-12-05 14:03:34.219437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.703 [2024-12-05 14:03:34.219444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.703 [2024-12-05 14:03:34.219450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.703 [2024-12-05 14:03:34.219465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.703 qpair failed and we were unable to recover it.
00:31:51.703 [2024-12-05 14:03:34.229431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.703 [2024-12-05 14:03:34.229486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.703 [2024-12-05 14:03:34.229499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.703 [2024-12-05 14:03:34.229505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.703 [2024-12-05 14:03:34.229511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.703 [2024-12-05 14:03:34.229525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.703 qpair failed and we were unable to recover it.
00:31:51.703 [2024-12-05 14:03:34.239424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.703 [2024-12-05 14:03:34.239476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.703 [2024-12-05 14:03:34.239489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.703 [2024-12-05 14:03:34.239496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.703 [2024-12-05 14:03:34.239502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.703 [2024-12-05 14:03:34.239516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.703 qpair failed and we were unable to recover it.
00:31:51.703 [2024-12-05 14:03:34.249462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.703 [2024-12-05 14:03:34.249519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.703 [2024-12-05 14:03:34.249532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.703 [2024-12-05 14:03:34.249538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.703 [2024-12-05 14:03:34.249544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.704 [2024-12-05 14:03:34.249558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.704 qpair failed and we were unable to recover it.
00:31:51.704 [2024-12-05 14:03:34.259499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.704 [2024-12-05 14:03:34.259555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.704 [2024-12-05 14:03:34.259569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.704 [2024-12-05 14:03:34.259575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.704 [2024-12-05 14:03:34.259581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.704 [2024-12-05 14:03:34.259595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.704 qpair failed and we were unable to recover it.
00:31:51.704 [2024-12-05 14:03:34.269524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.704 [2024-12-05 14:03:34.269573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.704 [2024-12-05 14:03:34.269587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.704 [2024-12-05 14:03:34.269593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.704 [2024-12-05 14:03:34.269599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.704 [2024-12-05 14:03:34.269612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.704 qpair failed and we were unable to recover it.
00:31:51.704 [2024-12-05 14:03:34.279531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.704 [2024-12-05 14:03:34.279587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.704 [2024-12-05 14:03:34.279601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.704 [2024-12-05 14:03:34.279607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.704 [2024-12-05 14:03:34.279613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.704 [2024-12-05 14:03:34.279626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.704 qpair failed and we were unable to recover it.
00:31:51.964 [2024-12-05 14:03:34.289619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.964 [2024-12-05 14:03:34.289675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.964 [2024-12-05 14:03:34.289692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.964 [2024-12-05 14:03:34.289698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.964 [2024-12-05 14:03:34.289704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.964 [2024-12-05 14:03:34.289718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.964 qpair failed and we were unable to recover it.
00:31:51.964 [2024-12-05 14:03:34.299646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.964 [2024-12-05 14:03:34.299749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.964 [2024-12-05 14:03:34.299762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.964 [2024-12-05 14:03:34.299768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.964 [2024-12-05 14:03:34.299774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.964 [2024-12-05 14:03:34.299788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.964 qpair failed and we were unable to recover it.
00:31:51.964 [2024-12-05 14:03:34.309549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.964 [2024-12-05 14:03:34.309644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.964 [2024-12-05 14:03:34.309657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.964 [2024-12-05 14:03:34.309663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.964 [2024-12-05 14:03:34.309669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.964 [2024-12-05 14:03:34.309683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.964 qpair failed and we were unable to recover it.
00:31:51.964 [2024-12-05 14:03:34.319642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.964 [2024-12-05 14:03:34.319693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.964 [2024-12-05 14:03:34.319706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.964 [2024-12-05 14:03:34.319713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.964 [2024-12-05 14:03:34.319719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.964 [2024-12-05 14:03:34.319733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.964 qpair failed and we were unable to recover it.
00:31:51.964 [2024-12-05 14:03:34.329686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.964 [2024-12-05 14:03:34.329746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.964 [2024-12-05 14:03:34.329761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.964 [2024-12-05 14:03:34.329767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.964 [2024-12-05 14:03:34.329773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.964 [2024-12-05 14:03:34.329790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.964 qpair failed and we were unable to recover it.
00:31:51.964 [2024-12-05 14:03:34.339707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.964 [2024-12-05 14:03:34.339780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.964 [2024-12-05 14:03:34.339793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.964 [2024-12-05 14:03:34.339800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.964 [2024-12-05 14:03:34.339806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.964 [2024-12-05 14:03:34.339820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.964 qpair failed and we were unable to recover it.
00:31:51.964 [2024-12-05 14:03:34.349723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.964 [2024-12-05 14:03:34.349824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.964 [2024-12-05 14:03:34.349837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.964 [2024-12-05 14:03:34.349844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.964 [2024-12-05 14:03:34.349849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.964 [2024-12-05 14:03:34.349863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.964 qpair failed and we were unable to recover it.
00:31:51.965 [2024-12-05 14:03:34.359764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.965 [2024-12-05 14:03:34.359840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.965 [2024-12-05 14:03:34.359853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.965 [2024-12-05 14:03:34.359860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.965 [2024-12-05 14:03:34.359866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.965 [2024-12-05 14:03:34.359879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.965 qpair failed and we were unable to recover it.
00:31:51.965 [2024-12-05 14:03:34.369806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.965 [2024-12-05 14:03:34.369860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.965 [2024-12-05 14:03:34.369873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.965 [2024-12-05 14:03:34.369879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.965 [2024-12-05 14:03:34.369885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.965 [2024-12-05 14:03:34.369899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.965 qpair failed and we were unable to recover it.
00:31:51.965 [2024-12-05 14:03:34.379799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.965 [2024-12-05 14:03:34.379855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.965 [2024-12-05 14:03:34.379870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.965 [2024-12-05 14:03:34.379877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.965 [2024-12-05 14:03:34.379882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbe5be0
00:31:51.965 [2024-12-05 14:03:34.379897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:51.965 qpair failed and we were unable to recover it.
00:31:51.965 [2024-12-05 14:03:34.389924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.965 [2024-12-05 14:03:34.390022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.965 [2024-12-05 14:03:34.390076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.965 [2024-12-05 14:03:34.390101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.965 [2024-12-05 14:03:34.390122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdb60000b90
00:31:51.965 [2024-12-05 14:03:34.390175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:51.965 qpair failed and we were unable to recover it.
00:31:51.965 [2024-12-05 14:03:34.399878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.965 [2024-12-05 14:03:34.399972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.965 [2024-12-05 14:03:34.400003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.965 [2024-12-05 14:03:34.400019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.965 [2024-12-05 14:03:34.400034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdb60000b90
00:31:51.965 [2024-12-05 14:03:34.400070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:51.965 qpair failed and we were unable to recover it.
00:31:51.965 [2024-12-05 14:03:34.400173] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:31:51.965 A controller has encountered a failure and is being reset.
00:31:51.965 [2024-12-05 14:03:34.409915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.965 [2024-12-05 14:03:34.410018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.965 [2024-12-05 14:03:34.410072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.965 [2024-12-05 14:03:34.410098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.965 [2024-12-05 14:03:34.410119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdb68000b90
00:31:51.965 [2024-12-05 14:03:34.410170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:51.965 qpair failed and we were unable to recover it.
00:31:51.965 [2024-12-05 14:03:34.419931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:51.965 [2024-12-05 14:03:34.420015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:51.965 [2024-12-05 14:03:34.420047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:51.965 [2024-12-05 14:03:34.420065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:51.965 [2024-12-05 14:03:34.420079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdb68000b90
00:31:51.965 [2024-12-05 14:03:34.420114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:51.965 qpair failed and we were unable to recover it.
00:31:51.965 Controller properly reset.
00:31:51.965 Initializing NVMe Controllers
00:31:51.965 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:51.965 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:51.965 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:31:51.965 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:31:51.965 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:31:51.965 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:31:51.965 Initialization complete. Launching workers.
00:31:51.965 Starting thread on core 1 00:31:51.965 Starting thread on core 2 00:31:51.965 Starting thread on core 3 00:31:51.965 Starting thread on core 0 00:31:51.965 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:31:51.965 00:31:51.965 real 0m10.847s 00:31:51.965 user 0m19.154s 00:31:51.965 sys 0m4.717s 00:31:51.965 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:51.965 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:51.965 ************************************ 00:31:51.965 END TEST nvmf_target_disconnect_tc2 00:31:51.965 ************************************ 00:31:51.965 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:31:51.965 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:31:51.965 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:31:51.965 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:51.965 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:31:51.965 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:51.965 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:31:51.965 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:51.965 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:51.965 rmmod nvme_tcp 00:31:52.225 rmmod nvme_fabrics 00:31:52.225 rmmod nvme_keyring 00:31:52.225 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:31:52.225 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:31:52.225 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:31:52.225 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 828186 ']' 00:31:52.225 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 828186 00:31:52.225 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 828186 ']' 00:31:52.225 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 828186 00:31:52.225 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:31:52.225 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:52.225 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 828186 00:31:52.225 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:31:52.225 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:31:52.225 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 828186' 00:31:52.225 killing process with pid 828186 00:31:52.225 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 828186 00:31:52.225 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 828186 00:31:52.485 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:52.485 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:52.485 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:52.485 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:31:52.485 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:31:52.485 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:52.485 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:31:52.485 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:52.485 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:52.485 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:52.485 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:52.485 14:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.389 14:03:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:54.389 00:31:54.389 real 0m19.636s 00:31:54.389 user 0m47.033s 00:31:54.389 sys 0m9.678s 00:31:54.389 14:03:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:54.389 14:03:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:54.389 ************************************ 00:31:54.389 END TEST nvmf_target_disconnect 00:31:54.389 ************************************ 00:31:54.389 14:03:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:54.389 00:31:54.389 real 5m52.506s 00:31:54.389 user 10m34.795s 00:31:54.389 sys 1m58.311s 00:31:54.389 14:03:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:54.389 14:03:36 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.389 ************************************ 00:31:54.389 END TEST nvmf_host 00:31:54.389 ************************************ 00:31:54.648 14:03:36 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:31:54.648 14:03:36 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:31:54.648 14:03:36 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:31:54.648 14:03:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:54.648 14:03:36 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:54.648 14:03:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:54.648 ************************************ 00:31:54.648 START TEST nvmf_target_core_interrupt_mode 00:31:54.648 ************************************ 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:31:54.649 * Looking for test storage... 
00:31:54.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:31:54.649 14:03:37 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:54.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.649 --rc 
genhtml_branch_coverage=1 00:31:54.649 --rc genhtml_function_coverage=1 00:31:54.649 --rc genhtml_legend=1 00:31:54.649 --rc geninfo_all_blocks=1 00:31:54.649 --rc geninfo_unexecuted_blocks=1 00:31:54.649 00:31:54.649 ' 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:54.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.649 --rc genhtml_branch_coverage=1 00:31:54.649 --rc genhtml_function_coverage=1 00:31:54.649 --rc genhtml_legend=1 00:31:54.649 --rc geninfo_all_blocks=1 00:31:54.649 --rc geninfo_unexecuted_blocks=1 00:31:54.649 00:31:54.649 ' 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:54.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.649 --rc genhtml_branch_coverage=1 00:31:54.649 --rc genhtml_function_coverage=1 00:31:54.649 --rc genhtml_legend=1 00:31:54.649 --rc geninfo_all_blocks=1 00:31:54.649 --rc geninfo_unexecuted_blocks=1 00:31:54.649 00:31:54.649 ' 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:54.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.649 --rc genhtml_branch_coverage=1 00:31:54.649 --rc genhtml_function_coverage=1 00:31:54.649 --rc genhtml_legend=1 00:31:54.649 --rc geninfo_all_blocks=1 00:31:54.649 --rc geninfo_unexecuted_blocks=1 00:31:54.649 00:31:54.649 ' 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:54.649 
14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.649 14:03:37 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:54.649 
14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:31:54.649 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:31:54.650 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:31:54.650 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:31:54.650 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:54.650 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:54.650 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:54.909 ************************************ 00:31:54.909 START TEST nvmf_abort 00:31:54.909 ************************************ 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:31:54.910 * Looking for test storage... 
00:31:54.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:31:54.910 14:03:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:54.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.910 --rc genhtml_branch_coverage=1 00:31:54.910 --rc genhtml_function_coverage=1 00:31:54.910 --rc genhtml_legend=1 00:31:54.910 --rc geninfo_all_blocks=1 00:31:54.910 --rc geninfo_unexecuted_blocks=1 00:31:54.910 00:31:54.910 ' 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:54.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.910 --rc genhtml_branch_coverage=1 00:31:54.910 --rc genhtml_function_coverage=1 00:31:54.910 --rc genhtml_legend=1 00:31:54.910 --rc geninfo_all_blocks=1 00:31:54.910 --rc geninfo_unexecuted_blocks=1 00:31:54.910 00:31:54.910 ' 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:54.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.910 --rc genhtml_branch_coverage=1 00:31:54.910 --rc genhtml_function_coverage=1 00:31:54.910 --rc genhtml_legend=1 00:31:54.910 --rc geninfo_all_blocks=1 00:31:54.910 --rc geninfo_unexecuted_blocks=1 00:31:54.910 00:31:54.910 ' 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:54.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.910 --rc genhtml_branch_coverage=1 00:31:54.910 --rc genhtml_function_coverage=1 00:31:54.910 --rc genhtml_legend=1 00:31:54.910 --rc geninfo_all_blocks=1 00:31:54.910 --rc geninfo_unexecuted_blocks=1 00:31:54.910 00:31:54.910 ' 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:54.910 14:03:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:54.910 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:54.911 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:54.911 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:54.911 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:54.911 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:54.911 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:54.911 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:54.911 14:03:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:54.911 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:31:54.911 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:31:54.911 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:54.911 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:54.911 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:54.911 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:54.911 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:54.911 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:54.911 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:54.911 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.911 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:54.911 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:54.911 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:31:54.911 14:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:01.483 14:03:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:01.483 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:01.483 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:01.483 
14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:01.483 Found net devices under 0000:86:00.0: cvl_0_0 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:01.483 Found net devices under 0000:86:00.1: cvl_0_1 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:01.483 14:03:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:01.483 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:01.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:01.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:32:01.484 00:32:01.484 --- 10.0.0.2 ping statistics --- 00:32:01.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:01.484 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:01.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:01.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:32:01.484 00:32:01.484 --- 10.0.0.1 ping statistics --- 00:32:01.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:01.484 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=832761 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 832761 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 832761 ']' 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:01.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:01.484 [2024-12-05 14:03:43.435825] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:01.484 [2024-12-05 14:03:43.436758] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:32:01.484 [2024-12-05 14:03:43.436793] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:01.484 [2024-12-05 14:03:43.514389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:01.484 [2024-12-05 14:03:43.553894] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:01.484 [2024-12-05 14:03:43.553930] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:01.484 [2024-12-05 14:03:43.553940] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:01.484 [2024-12-05 14:03:43.553946] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:01.484 [2024-12-05 14:03:43.553951] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:01.484 [2024-12-05 14:03:43.555449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:01.484 [2024-12-05 14:03:43.555536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:01.484 [2024-12-05 14:03:43.555537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:01.484 [2024-12-05 14:03:43.624154] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:01.484 [2024-12-05 14:03:43.624892] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:01.484 [2024-12-05 14:03:43.624911] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:01.484 [2024-12-05 14:03:43.625114] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:01.484 [2024-12-05 14:03:43.704402] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:32:01.484 Malloc0 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:01.484 Delay0 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:01.484 [2024-12-05 14:03:43.796363] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:01.484 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.485 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:01.485 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.485 14:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:32:01.485 [2024-12-05 14:03:43.925505] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:32:04.010 Initializing NVMe Controllers 00:32:04.010 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:04.010 controller IO queue size 128 less than required 00:32:04.011 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:32:04.011 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:32:04.011 Initialization complete. Launching workers. 
00:32:04.011 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 38238 00:32:04.011 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38295, failed to submit 66 00:32:04.011 success 38238, unsuccessful 57, failed 0 00:32:04.011 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:04.011 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.011 14:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:04.011 rmmod nvme_tcp 00:32:04.011 rmmod nvme_fabrics 00:32:04.011 rmmod nvme_keyring 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:04.011 14:03:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 832761 ']' 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 832761 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 832761 ']' 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 832761 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 832761 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 832761' 00:32:04.011 killing process with pid 832761 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 832761 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 832761 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:04.011 14:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.912 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:05.912 00:32:05.912 real 0m11.121s 00:32:05.912 user 0m10.328s 00:32:05.912 sys 0m5.683s 00:32:05.912 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:05.912 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:05.912 ************************************ 00:32:05.912 END TEST nvmf_abort 00:32:05.912 ************************************ 00:32:05.912 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:32:05.912 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:05.912 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:05.912 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:05.912 ************************************ 00:32:05.912 START TEST nvmf_ns_hotplug_stress 00:32:05.912 ************************************ 00:32:05.912 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:32:06.172 * Looking for test storage... 00:32:06.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:06.172 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:06.172 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:32:06.172 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:06.172 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:06.172 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:06.172 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:06.172 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
scripts/common.sh@334 -- # local ver2 ver2_l 00:32:06.172 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:32:06.172 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:32:06.172 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:32:06.172 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:32:06.172 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:32:06.172 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:32:06.172 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:32:06.172 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:06.172 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:32:06.172 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:32:06.172 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:06.172 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:06.172 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:32:06.172 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:32:06.172 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:06.172 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:32:06.172 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:32:06.172 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:32:06.172 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:32:06.172 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:06.172 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:32:06.172 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:32:06.172 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:06.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.173 --rc genhtml_branch_coverage=1 00:32:06.173 --rc genhtml_function_coverage=1 00:32:06.173 --rc genhtml_legend=1 00:32:06.173 --rc geninfo_all_blocks=1 00:32:06.173 --rc geninfo_unexecuted_blocks=1 00:32:06.173 00:32:06.173 ' 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:06.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.173 --rc genhtml_branch_coverage=1 00:32:06.173 --rc genhtml_function_coverage=1 00:32:06.173 --rc genhtml_legend=1 00:32:06.173 --rc geninfo_all_blocks=1 00:32:06.173 --rc geninfo_unexecuted_blocks=1 00:32:06.173 00:32:06.173 ' 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:06.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.173 --rc genhtml_branch_coverage=1 00:32:06.173 --rc genhtml_function_coverage=1 00:32:06.173 --rc genhtml_legend=1 00:32:06.173 --rc geninfo_all_blocks=1 00:32:06.173 --rc geninfo_unexecuted_blocks=1 00:32:06.173 00:32:06.173 ' 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:06.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.173 --rc genhtml_branch_coverage=1 00:32:06.173 --rc genhtml_function_coverage=1 00:32:06.173 --rc genhtml_legend=1 00:32:06.173 --rc geninfo_all_blocks=1 00:32:06.173 --rc geninfo_unexecuted_blocks=1 00:32:06.173 00:32:06.173 ' 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@7 -- # uname -s 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:06.173 14:03:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:06.173 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:06.174 14:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:32:06.174 14:03:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:12.762 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:12.762 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:32:12.762 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:12.762 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:12.762 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:12.762 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:12.762 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:12.762 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:32:12.762 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:12.762 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:32:12.762 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:32:12.762 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:32:12.762 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:32:12.762 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:32:12.762 14:03:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:32:12.762 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:12.762 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:12.762 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:12.762 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:12.762 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:12.762 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:12.762 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:12.762 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:12.762 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:12.762 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:12.762 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:12.763 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:12.763 
14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:12.763 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:12.763 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:12.763 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:12.763 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:12.763 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:12.763 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:12.763 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:12.763 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:12.763 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:12.763 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:12.763 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:12.763 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:12.763 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:12.763 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:12.763 14:03:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:12.764 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:12.764 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:12.764 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:12.764 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:12.764 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:12.764 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:12.764 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:12.764 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:12.764 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:12.764 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:12.764 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:12.764 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:12.764 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:12.764 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:12.764 14:03:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:12.764 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:12.764 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:12.764 Found net devices under 0000:86:00.0: cvl_0_0 00:32:12.764 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:12.764 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:12.764 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:12.764 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:12.764 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:12.764 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:12.764 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:12.764 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:12.764 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:12.764 Found net devices under 0000:86:00.1: cvl_0_1 00:32:12.764 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
00:32:12.765 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:12.765 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:32:12.765 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:12.765 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:12.765 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:12.765 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:12.765 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:12.765 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:12.765 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:12.765 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:12.765 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:12.765 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:12.765 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:12.765 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:12.765 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:12.765 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:12.765 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:12.765 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:12.765 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:12.765 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:12.765 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:12.765 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:12.765 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:12.765 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:12.765 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:12.765 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:12.765 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:32:12.765 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:12.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:12.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:32:12.765 00:32:12.765 --- 10.0.0.2 ping statistics --- 00:32:12.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.768 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:32:12.768 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:12.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:12.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:32:12.768 00:32:12.768 --- 10.0.0.1 ping statistics --- 00:32:12.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.768 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:32:12.768 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:12.768 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:32:12.768 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:12.768 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:12.768 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:12.768 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:12.768 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:12.768 14:03:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:12.768 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:12.768 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:32:12.768 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:12.768 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:12.768 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:12.768 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=836755 00:32:12.768 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 836755 00:32:12.768 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:32:12.768 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 836755 ']' 00:32:12.768 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:12.768 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:12.768 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:12.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:12.768 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:12.769 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:12.769 [2024-12-05 14:03:54.616583] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:12.769 [2024-12-05 14:03:54.617561] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:32:12.769 [2024-12-05 14:03:54.617598] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:12.769 [2024-12-05 14:03:54.698269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:12.769 [2024-12-05 14:03:54.739252] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:12.769 [2024-12-05 14:03:54.739288] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:12.769 [2024-12-05 14:03:54.739295] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:12.769 [2024-12-05 14:03:54.739301] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:12.769 [2024-12-05 14:03:54.739306] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
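The `nvmf/common.sh@265-291` commands near the top of this trace follow the usual SPDK test-net pattern: move one port of the NIC pair into a private namespace, address both ends, open TCP/4420, and ping both ways. A minimal sketch of that sequence, using the interface and namespace names from the log; the `run` dry-run wrapper is mine (not part of common.sh) so the commands can be previewed without root:

```shell
#!/usr/bin/env bash
# Sketch of the netns setup traced at nvmf/common.sh@265-291 (names from the log).
# DRY_RUN=1 prints each command instead of executing it (the real thing needs root).
NS=cvl_0_0_ns_spdk
IF_TGT=cvl_0_0           # moved into the namespace, gets 10.0.0.2 (target side)
IF_HOST=cvl_0_1          # stays in the root namespace, gets 10.0.0.1 (initiator side)

run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }

setup_ns() {
    run ip -4 addr flush "$IF_TGT"
    run ip -4 addr flush "$IF_HOST"
    run ip netns add "$NS"
    run ip link set "$IF_TGT" netns "$NS"
    run ip addr add 10.0.0.1/24 dev "$IF_HOST"
    run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$IF_TGT"
    run ip link set "$IF_HOST" up
    run ip netns exec "$NS" ip link set "$IF_TGT" up
    run ip netns exec "$NS" ip link set lo up
    # Let initiator traffic reach the NVMe/TCP listener port.
    run iptables -I INPUT 1 -i "$IF_HOST" -p tcp --dport 4420 -j ACCEPT
    # Verify connectivity in both directions, as the trace does before returning 0.
    run ping -c 1 10.0.0.2
    run ip netns exec "$NS" ping -c 1 10.0.0.1
}
```

With `DRY_RUN=1 setup_ns` this prints the twelve commands in order, matching the `@265`-`@291` trace above.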
00:32:12.769 [2024-12-05 14:03:54.740647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:12.769 [2024-12-05 14:03:54.740679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:12.769 [2024-12-05 14:03:54.740679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:12.769 [2024-12-05 14:03:54.808450] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:12.769 [2024-12-05 14:03:54.809304] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:12.769 [2024-12-05 14:03:54.809523] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:12.769 [2024-12-05 14:03:54.809618] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:12.769 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:12.769 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:32:12.769 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:12.769 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:12.769 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:12.769 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:12.769 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
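With nvmf_tgt up in interrupt mode, `target/ns_hotplug_stress.sh@27-@36` next builds the target configuration over `scripts/rpc.py`, as the following trace entries show. A condensed sketch of that RPC sequence; the `rpc` function here is a stub that only prints the call, since the real script invokes rpc.py against the live nvmf_tgt inside the namespace:

```shell
# Condensed sketch of the @27-@36 setup calls from ns_hotplug_stress.sh.
# `rpc` is a print-only stub standing in for scripts/rpc.py.
NQN=nqn.2016-06.io.spdk:cnode1
rpc() { echo "rpc.py $*"; }

setup_target() {
    rpc nvmf_create_transport -t tcp -o -u 8192                        # @27: TCP transport
    rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10    # @29: subsystem, max 10 ns
    rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420  # @30: data listener
    rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420  # @31: discovery
    rpc bdev_malloc_create 32 512 -b Malloc0                           # @32: backing bdev
    rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 \
        -w 1000000 -n 1000000                                          # @33: delayed bdev
    rpc nvmf_subsystem_add_ns "$NQN" Delay0                            # @34: nsid 1
    rpc bdev_null_create NULL1 1000 512                                # @35: resizable null bdev
    rpc nvmf_subsystem_add_ns "$NQN" NULL1                             # @36: nsid 2
}
```

The perf initiator started at `@40` (`spdk_nvme_perf ... -w randread ... -t 30`) then drives I/O at this subsystem while the hotplug loop churns namespace 1.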
00:32:12.769 14:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:12.769 [2024-12-05 14:03:55.045623] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:12.769 14:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:12.769 14:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:13.028 [2024-12-05 14:03:55.441961] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:13.028 14:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:13.287 14:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:32:13.287 Malloc0 00:32:13.287 14:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:13.546 Delay0 00:32:13.546 14:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:13.805 14:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:32:14.064 NULL1 00:32:14.064 14:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:32:14.064 14:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=837019 00:32:14.064 14:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:32:14.064 14:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 837019 00:32:14.064 14:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:14.323 14:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:14.582 14:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:32:14.582 14:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:32:14.841 true 00:32:14.841 14:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 837019 00:32:14.841 14:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:14.841 14:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:15.100 14:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:32:15.100 14:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:32:15.359 true 00:32:15.359 14:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 837019 00:32:15.359 14:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:16.737 Read completed with error (sct=0, sc=11) 00:32:16.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:16.737 14:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:16.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:16.737 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:16.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:16.737 14:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:32:16.737 14:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:32:16.737 true 00:32:16.996 14:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 837019 00:32:16.996 14:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:17.933 14:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:17.933 14:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:32:17.933 14:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:32:18.192 true 00:32:18.192 14:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 837019 00:32:18.192 14:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:18.192 14:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:18.451 14:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:32:18.451 14:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:32:18.710 true 00:32:18.710 14:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 837019 00:32:18.710 14:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:19.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:19.646 14:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:19.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:19.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:19.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:19.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:19.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:19.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:19.905 14:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:32:19.905 14:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:32:20.164 true 00:32:20.164 14:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 837019 00:32:20.164 14:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:21.100 14:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:21.100 14:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:32:21.100 14:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:32:21.360 true 00:32:21.360 14:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 837019 00:32:21.360 14:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:21.618 14:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:21.877 14:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:32:21.877 14:04:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:32:21.877 true 00:32:21.877 14:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 837019 00:32:21.877 14:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:23.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:23.253 14:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:23.253 14:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:32:23.253 14:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:32:23.253 true 00:32:23.513 14:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 837019 00:32:23.513 14:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:23.513 14:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:23.771 14:04:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:32:23.771 14:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:32:24.030 true 00:32:24.030 14:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 837019 00:32:24.030 14:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:24.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:24.962 14:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:24.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:25.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:25.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:25.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:25.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:25.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:25.220 14:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:32:25.220 14:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:32:25.478 true 00:32:25.478 14:04:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 837019 00:32:25.478 14:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:26.410 14:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:26.410 14:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:32:26.410 14:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:32:26.668 true 00:32:26.668 14:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 837019 00:32:26.668 14:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:26.927 14:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:27.185 14:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:32:27.185 14:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:32:27.185 true 
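The `@44`-`@50` pattern repeating through this section is the stress loop itself: confirm spdk_nvme_perf (PID 837019) is still alive with `kill -0`, hot-remove namespace 1, re-add it backed by Delay0, then grow NULL1 by one block (null_size 1000 → 1001 → 1002 → …). A minimal sketch of that loop logic, with the RPC calls stubbed and a fixed iteration count standing in for the real `kill -0 $PERF_PID` liveness check:

```shell
# Sketch of the @44-@50 hotplug loop; RPCs are stubbed with echo, and a fixed
# iteration count replaces the script's "while perf is alive" condition.
NQN=nqn.2016-06.io.spdk:cnode1
rpc() { echo "rpc.py $*"; }

hotplug_loop() {
    local iters=$1 null_size=1000
    for ((i = 0; i < iters; i++)); do
        rpc nvmf_subsystem_remove_ns "$NQN" 1      # @45: hot-remove nsid 1
        rpc nvmf_subsystem_add_ns "$NQN" Delay0    # @46: re-add it
        null_size=$((null_size + 1))               # @49: 1001, 1002, ...
        rpc bdev_null_resize NULL1 "$null_size"    # @50: resize while I/O runs
    done
    echo "$null_size"
}
```

`hotplug_loop 5` prints the fifteen stubbed RPC calls followed by a final size of 1005, matching the null_size progression visible in the trace; the perf errors interleaved above (`Read completed with error (sct=0, sc=11)`) are the expected fallout of reads racing the namespace removal.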
00:32:27.185 14:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 837019 00:32:27.185 14:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:28.558 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:28.558 14:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:28.558 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:28.558 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:28.558 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:28.558 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:28.558 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:28.558 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:28.558 14:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:32:28.558 14:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:32:28.816 true 00:32:28.816 14:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 837019 00:32:28.816 14:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:29.750 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:29.750 14:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:29.750 14:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:32:29.750 14:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:32:30.008 true 00:32:30.008 14:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 837019 00:32:30.008 14:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:30.266 14:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:30.524 14:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:32:30.524 14:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:32:30.782 true 00:32:30.783 14:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 837019 00:32:30.783 14:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:31.718 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:31.718 14:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:31.718 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:31.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:31.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:31.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:31.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:31.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:31.976 14:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:32:31.976 14:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:32:32.235 true 00:32:32.235 14:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 837019 00:32:32.235 14:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:33.169 14:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:33.169 14:04:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:32:33.169 14:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:32:33.428 true 00:32:33.428 14:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 837019 00:32:33.428 14:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:33.687 14:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:33.946 14:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:32:33.946 14:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:32:33.946 true 00:32:33.946 14:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 837019 00:32:33.946 14:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:35.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:35.323 14:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:35.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:35.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:35.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:35.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:35.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:35.323 14:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:32:35.323 14:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:32:35.582 true 00:32:35.582 14:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 837019 00:32:35.582 14:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:36.519 14:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:36.519 14:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:32:36.519 14:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:32:36.777 true 00:32:36.777 14:04:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 837019 00:32:36.777 14:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:36.777 14:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:37.035 14:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:32:37.035 14:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:32:37.293 true 00:32:37.293 14:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 837019 00:32:37.293 14:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:38.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:38.230 14:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:38.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:38.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:38.489 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:38.489 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:32:38.489 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:38.489 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:38.489 14:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:32:38.489 14:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:32:38.747 true 00:32:38.747 14:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 837019 00:32:38.747 14:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:39.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:39.688 14:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:39.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:39.688 14:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:32:39.688 14:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:32:39.961 true 00:32:39.961 14:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 837019 00:32:39.961 14:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:40.219 14:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:40.478 14:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:32:40.478 14:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:32:40.478 true 00:32:40.478 14:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 837019 00:32:40.478 14:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:41.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:41.855 14:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:41.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:41.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:41.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:41.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:41.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:41.855 14:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:32:41.855 14:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:32:42.114 true 00:32:42.114 14:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 837019 00:32:42.114 14:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:43.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:43.048 14:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:43.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:43.048 14:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:32:43.048 14:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:32:43.306 true 00:32:43.306 14:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 837019 00:32:43.306 14:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:43.610 14:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:43.925 14:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:32:43.925 14:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:32:43.925 true
00:32:43.925 14:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 837019
00:32:43.925 14:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:45.312 Initializing NVMe Controllers
00:32:45.312 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:45.312 Controller IO queue size 128, less than required.
00:32:45.312 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:45.312 Controller IO queue size 128, less than required.
00:32:45.312 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:45.312 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:32:45.312 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:32:45.312 Initialization complete. Launching workers.
00:32:45.312 ========================================================
00:32:45.312                                                                    Latency(us)
00:32:45.312 Device Information                                               :       IOPS      MiB/s    Average        min        max
00:32:45.312 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1974.00       0.96   42232.69    1694.94 1133406.88
00:32:45.312 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   16920.43       8.26    7545.97    1561.21  369495.63
00:32:45.312 ========================================================
00:32:45.312 Total                                                            :   18894.43       9.23   11169.87    1561.21 1133406.88
00:32:45.312
00:32:45.312 14:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:45.312 14:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:32:45.312 14:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:32:45.312 true
00:32:45.570 14:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 837019
00:32:45.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (837019) - No such process
00:32:45.570 14:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 837019
00:32:45.570 14:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:45.570 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # 
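The records above trace one iteration of the hotplug loop at ns_hotplug_stress.sh lines 44-50: check that the background I/O process is still alive, hot-remove namespace 1, hot-add the Delay0 bdev back, then grow the NULL1 bdev by one unit. A minimal runnable sketch of that loop, reconstructed from these log lines only; here `rpc` is a hypothetical stub standing in for `scripts/rpc.py` so the sketch runs without an SPDK target, and three iterations stand in for the full run:

```shell
# Stub for scripts/rpc.py (assumption: real script would talk to the SPDK target).
rpc() { echo "rpc $*"; }

null_size=1018
pid=$$   # stand-in for the background perf process (837019 in the log above)
for _ in 1 2 3; do
    kill -0 "$pid" || break                                       # sh@44: I/O worker still alive?
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # sh@45: hot-remove NSID 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # sh@46: hot-add it back
    null_size=$((null_size + 1))                                  # sh@49: next size
    rpc bdev_null_resize NULL1 "$null_size"                       # sh@50: grow the null bdev
done
```

The loop exits once the I/O process dies (`kill -0` fails), which matches the `No such process` / `wait 837019` records that end this phase in the log.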
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:45.827 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:32:45.827 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:32:45.827 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:32:45.827 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:45.827 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:32:46.084 null0 00:32:46.084 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:46.084 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:46.084 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:32:46.084 null1 00:32:46.084 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:46.084 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:46.084 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:32:46.381 null2 00:32:46.381 14:04:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:46.381 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:46.381 14:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:32:46.639 null3 00:32:46.639 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:46.639 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:46.639 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:32:46.639 null4 00:32:46.639 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:46.639 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:46.639 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:32:46.897 null5 00:32:46.897 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:46.897 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:46.897 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:32:47.155 null6 00:32:47.155 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:47.155 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:47.155 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:32:47.155 null7 00:32:47.155 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:47.155 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:47.155 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:32:47.155 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
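The run of `bdev_null_create null0 100 4096` through `null7` records above corresponds to the indexed loop at sh@59-60: create one 100 MiB null bdev with 4096-byte blocks per worker thread. A sketch under the same stubbed-`rpc` assumption as before:

```shell
# Stub for scripts/rpc.py (assumption: no SPDK target is needed to run this sketch).
rpc() { echo "rpc $*"; }

nthreads=8
created=0
for ((i = 0; i < nthreads; i++)); do        # sh@59: counted loop over worker slots
    rpc bdev_null_create "null$i" 100 4096  # sh@60: 100 MiB null bdev, 4096-byte blocks
    created=$((created + 1))
done
```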
00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 842553 842555 842558 842561 842565 842567 842570 842573 00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 
00:32:47.413 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:47.414 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.414 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:47.414 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:47.414 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:47.414 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:47.414 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:47.414 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:47.414 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
00:32:47.414 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:47.414 14:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:47.671 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.671 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.671 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:47.671 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.671 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.671 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:47.671 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.671 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.671 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:47.671 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.671 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.671 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.671 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.671 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:47.671 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:47.671 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.671 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.671 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:47.671 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.671 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.671 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.671 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.671 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:47.671 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:47.929 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:47.929 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:47.929 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:47.929 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:47.929 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:47.929 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:47.929 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:47.929 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:48.188 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.188 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.188 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:48.188 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.188 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.188 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:48.188 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.188 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.188 14:04:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:48.188 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.188 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.188 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:48.188 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.188 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.188 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:48.188 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.188 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.188 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:48.188 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.188 14:04:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.188 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:48.188 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.188 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.188 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:48.188 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:48.447 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:48.447 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:48.447 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:48.447 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:48.447 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:48.447 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:48.447 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:48.447 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.447 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.447 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:48.448 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.448 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.448 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:48.448 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.448 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.448 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:48.448 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.448 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.448 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:48.448 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.448 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.448 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.448 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.448 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:48.448 14:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 
null5 00:32:48.448 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.448 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.448 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:48.448 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.448 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.448 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:48.707 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:48.707 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:48.708 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:48.708 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:32:48.708 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:48.708 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:48.708 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:48.708 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:48.968 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.968 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.968 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:48.968 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.968 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.968 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:48.968 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.968 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.968 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.968 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.968 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:48.968 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:48.968 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.968 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.968 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.968 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:48.968 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.968 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:48.968 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.968 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.968 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:48.968 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.968 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.968 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:49.228 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:49.228 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:49.228 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:49.228 14:04:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:49.228 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:49.228 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:49.228 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:49.228 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:49.514 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.514 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.514 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:49.514 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.514 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
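The interleaved entries above all come from the loop at `target/ns_hotplug_stress.sh` lines 16–18: ten iterations that hot-add namespaces 1–8 (backed by bdevs `null0`–`null7`) on `nqn.2016-06.io.spdk:cnode1` and then hot-remove them. A minimal sketch of that loop is below; the `rpc` function is a hypothetical stub standing in for `scripts/rpc.py` so the sketch runs standalone, and the calls run serially here, whereas the real script apparently issues them concurrently (which is why the add/remove order is shuffled in the log).

```shell
#!/usr/bin/env bash
# Sketch of the hotplug stress loop traced in the log above.
# rpc() is a stub for scripts/rpc.py (assumption: only the argument
# shapes are taken from the log, not the real script source).
LOG=$(mktemp)
rpc() { printf 'rpc.py %s\n' "$*" >> "$LOG"; }

NQN=nqn.2016-06.io.spdk:cnode1
for (( i = 0; i < 10; ++i )); do
    # Hot-add namespaces 1..8, each backed by a null bdev null0..null7 ...
    for n in $(seq 1 8); do
        rpc nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"
    done
    # ... then hot-remove the same eight namespaces again.
    for n in $(seq 1 8); do
        rpc nvmf_subsystem_remove_ns "$NQN" "$n"
    done
done
```

Each iteration issues 16 RPCs (8 adds, 8 removes), matching the bursts of timestamped entries in the log.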
00:32:49.514 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:49.514 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.514 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.514 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:49.514 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.514 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.514 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:49.514 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.514 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.514 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:49.514 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.514 14:04:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.514 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:49.514 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.514 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.514 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:49.514 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.514 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.514 14:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:49.514 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:49.514 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:49.514 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:49.514 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:49.514 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:49.514 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:49.514 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:49.514 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:49.772 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.772 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.772 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:49.773 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.773 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.773 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:49.773 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.773 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.773 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:49.773 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.773 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.773 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:49.773 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.773 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.773 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.773 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:49.773 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.773 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:49.773 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.773 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.773 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:49.773 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.773 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.773 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:50.031 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:50.031 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:50.031 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:50.031 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:50.031 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:50.031 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:50.031 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:50.031 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:50.289 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.289 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.289 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:50.289 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.289 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.290 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:50.290 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.290 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.290 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:50.290 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.290 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.290 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:50.290 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.290 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.290 14:04:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.290 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.290 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:50.290 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:50.290 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.290 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.290 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.290 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:50.290 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.290 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:50.290 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:50.548 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:50.548 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:50.548 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:50.549 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:50.549 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:50.549 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:50.549 14:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:50.549 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.549 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.549 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:50.549 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.549 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.549 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:50.549 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.549 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.549 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:50.549 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.549 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.549 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:50.549 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:32:50.549 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.549 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:50.549 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.549 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.549 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:50.549 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.549 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.549 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:50.549 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.549 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.549 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:50.808 14:04:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:50.808 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:50.808 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:50.808 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:50.808 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:50.808 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:50.808 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:50.808 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:51.067 14:04:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:51.067 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:51.067 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:51.067 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:51.067 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:51.067 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:51.067 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:51.067 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:51.067 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:51.067 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:51.067 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:51.067 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:51.067 14:04:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:51.067 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:51.067 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:51.067 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:51.067 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:51.067 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:51.067 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:51.067 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:51.068 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:51.068 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:51.068 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:51.068 14:04:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:51.326 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:51.326 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:51.326 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:51.326 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:51.326 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:51.326 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:51.326 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:51.326 14:04:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:51.326 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:51.326 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:51.586 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:51.586 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:51.586 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:51.586 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:51.586 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:51.586 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:51.586 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:51.586 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:51.586 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:51.586 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:51.586 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:32:51.586 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:51.586 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:51.586 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:51.586 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:32:51.586 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:32:51.586 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:51.586 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:32:51.586 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:51.586 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:32:51.586 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:51.586 14:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:51.586 rmmod nvme_tcp 00:32:51.586 rmmod nvme_fabrics 00:32:51.586 rmmod nvme_keyring 00:32:51.586 14:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:51.586 14:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:32:51.586 14:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:32:51.586 14:04:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 836755 ']' 00:32:51.586 14:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 836755 00:32:51.586 14:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 836755 ']' 00:32:51.586 14:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 836755 00:32:51.586 14:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:32:51.586 14:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:51.586 14:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 836755 00:32:51.586 14:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:51.586 14:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:51.586 14:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 836755' 00:32:51.586 killing process with pid 836755 00:32:51.586 14:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 836755 00:32:51.586 14:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 836755 00:32:51.844 14:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:51.844 14:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
00:32:51.844 14:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:51.844 14:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:32:51.844 14:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:32:51.844 14:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:51.844 14:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:32:51.844 14:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:51.844 14:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:51.844 14:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.844 14:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:51.844 14:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.743 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:53.743 00:32:53.743 real 0m47.855s 00:32:53.743 user 2m59.586s 00:32:53.743 sys 0m19.647s 00:32:53.743 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:53.743 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:53.743 ************************************ 00:32:53.743 END TEST nvmf_ns_hotplug_stress 00:32:53.743 
************************************ 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:54.002 ************************************ 00:32:54.002 START TEST nvmf_delete_subsystem 00:32:54.002 ************************************ 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:32:54.002 * Looking for test storage... 
00:32:54.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:32:54.002 14:04:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:32:54.002 14:04:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:54.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.002 --rc genhtml_branch_coverage=1 00:32:54.002 --rc genhtml_function_coverage=1 00:32:54.002 --rc genhtml_legend=1 00:32:54.002 --rc geninfo_all_blocks=1 00:32:54.002 --rc geninfo_unexecuted_blocks=1 00:32:54.002 00:32:54.002 ' 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:54.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.002 --rc genhtml_branch_coverage=1 00:32:54.002 --rc genhtml_function_coverage=1 00:32:54.002 --rc genhtml_legend=1 00:32:54.002 --rc geninfo_all_blocks=1 00:32:54.002 --rc geninfo_unexecuted_blocks=1 00:32:54.002 00:32:54.002 ' 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:54.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.002 --rc genhtml_branch_coverage=1 00:32:54.002 --rc genhtml_function_coverage=1 00:32:54.002 --rc genhtml_legend=1 00:32:54.002 --rc geninfo_all_blocks=1 00:32:54.002 --rc geninfo_unexecuted_blocks=1 00:32:54.002 00:32:54.002 ' 00:32:54.002 14:04:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:54.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.002 --rc genhtml_branch_coverage=1 00:32:54.002 --rc genhtml_function_coverage=1 00:32:54.002 --rc genhtml_legend=1 00:32:54.002 --rc geninfo_all_blocks=1 00:32:54.002 --rc geninfo_unexecuted_blocks=1 00:32:54.002 00:32:54.002 ' 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:54.002 14:04:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:54.002 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:54.003 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:54.003 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:54.003 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:54.003 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:54.003 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:54.003 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:54.260 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:32:54.260 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:54.260 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:54.260 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:54.260 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.260 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.260 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.260 
14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:32:54.261 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.261 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:32:54.261 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:54.261 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:54.261 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:54.261 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:54.261 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:54.261 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:54.261 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:54.261 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:54.261 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:54.261 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:54.261 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:32:54.261 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:54.261 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:54.261 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:54.261 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:54.261 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:54.261 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:54.261 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:54.261 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:54.261 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:54.261 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:54.261 14:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:32:54.261 14:04:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:00.827 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.1 (0x8086 - 0x159b)' 00:33:00.827 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:00.827 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:00.828 14:04:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:00.828 Found net devices under 0000:86:00.0: cvl_0_0 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:00.828 Found net devices under 0000:86:00.1: cvl_0_1 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:00.828 14:04:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:33:00.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:00.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.426 ms 00:33:00.828 00:33:00.828 --- 10.0.0.2 ping statistics --- 00:33:00.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:00.828 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:00.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:00.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:33:00.828 00:33:00.828 --- 10.0.0.1 ping statistics --- 00:33:00.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:00.828 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=846768 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 846768 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 846768 ']' 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:00.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:00.828 [2024-12-05 14:04:42.500190] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:00.828 [2024-12-05 14:04:42.501126] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:33:00.828 [2024-12-05 14:04:42.501160] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:00.828 [2024-12-05 14:04:42.580404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:00.828 [2024-12-05 14:04:42.622983] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:00.828 [2024-12-05 14:04:42.623019] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:00.828 [2024-12-05 14:04:42.623026] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:00.828 [2024-12-05 14:04:42.623032] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:00.828 [2024-12-05 14:04:42.623037] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:00.828 [2024-12-05 14:04:42.624240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:00.828 [2024-12-05 14:04:42.624240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:00.828 [2024-12-05 14:04:42.693591] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:33:00.828 [2024-12-05 14:04:42.694171] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:00.828 [2024-12-05 14:04:42.694298] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:00.828 [2024-12-05 14:04:42.773055] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:00.828 [2024-12-05 14:04:42.801387] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:00.828 NULL1 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:00.828 Delay0 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=846986 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:33:00.828 14:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:33:00.828 [2024-12-05 14:04:42.915287] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:33:02.737 14:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:02.737 14:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.737 14:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:02.737 Read completed with error (sct=0, sc=8) 00:33:02.737 Read completed with error (sct=0, sc=8) 00:33:02.737 Write completed with error (sct=0, sc=8) 00:33:02.737 Read completed with error (sct=0, sc=8) 00:33:02.737 starting I/O failed: -6 00:33:02.737 Read completed with error (sct=0, sc=8) 00:33:02.737 Read completed with error (sct=0, sc=8) 00:33:02.737 Write completed with error (sct=0, sc=8) 00:33:02.737 Write completed with error (sct=0, sc=8) 00:33:02.737 starting I/O failed: -6 00:33:02.737 Read completed with error (sct=0, sc=8) 00:33:02.737 Read completed with error (sct=0, sc=8) 00:33:02.737 Read completed with error (sct=0, sc=8) 00:33:02.737 Write completed with error (sct=0, sc=8) 00:33:02.737 starting I/O failed: -6 00:33:02.737 Write completed with error (sct=0, sc=8) 00:33:02.737 Write completed with error (sct=0, sc=8) 00:33:02.737 Read completed with error (sct=0, sc=8) 00:33:02.737 Write completed with error (sct=0, sc=8) 00:33:02.737 starting I/O failed: -6 00:33:02.737 Read completed with error (sct=0, sc=8) 00:33:02.737 Write completed with error (sct=0, sc=8) 00:33:02.738 Read completed with error (sct=0, sc=8) 00:33:02.738 Read completed with error (sct=0, sc=8) 00:33:02.738 starting I/O failed: -6 00:33:02.738 Write completed with error (sct=0, sc=8) 00:33:02.738 Write completed with error (sct=0, sc=8) 00:33:02.738 Write completed with error (sct=0, sc=8) 00:33:02.738 Write completed with error (sct=0, sc=8) 00:33:02.738 starting I/O failed: -6 00:33:02.738 Write completed with error 
(sct=0, sc=8) 00:33:02.738 Read completed with error (sct=0, sc=8) 00:33:02.738 Read completed with error (sct=0, sc=8) 00:33:02.738 Write completed with error (sct=0, sc=8) 00:33:02.738 starting I/O failed: -6 00:33:02.738 Write completed with error (sct=0, sc=8) 00:33:02.738 Read completed with error (sct=0, sc=8) 00:33:02.738 Read completed with error (sct=0, sc=8) 00:33:02.738 Read completed with error (sct=0, sc=8) 00:33:02.738 starting I/O failed: -6 00:33:02.738 Read completed with error (sct=0, sc=8) 00:33:02.738 Read completed with error (sct=0, sc=8) 00:33:02.738 Write completed with error (sct=0, sc=8) 00:33:02.738 Read completed with error (sct=0, sc=8) 00:33:02.738 starting I/O failed: -6 00:33:02.738 Read completed with error (sct=0, sc=8) 00:33:02.738 Write completed with error (sct=0, sc=8) 00:33:02.738 [2024-12-05 14:04:45.042320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x192e2c0 is same with the state(6) to be set 00:33:02.738 Read completed with error (sct=0, sc=8) 00:33:02.738 Write completed with error (sct=0, sc=8) 00:33:02.738 Write completed with error (sct=0, sc=8) 00:33:02.738 Read completed with error (sct=0, sc=8) 00:33:02.738 Write completed with error (sct=0, sc=8) 00:33:02.738 Read completed with error (sct=0, sc=8) 00:33:02.738 Read completed with error (sct=0, sc=8) 00:33:02.738 Read completed with error (sct=0, sc=8) 00:33:02.738 Write completed with error (sct=0, sc=8) 00:33:02.738 Read completed with error (sct=0, sc=8) 00:33:02.739 Write completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Write completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Write completed with error (sct=0, sc=8) 00:33:02.739 Write completed with error (sct=0, 
sc=8) 00:33:02.739 Write completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Write completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Write completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Write completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Write completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Write completed with error (sct=0, sc=8) 00:33:02.739 Write completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Write completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Read completed with error (sct=0, sc=8) 00:33:02.739 Write completed with error (sct=0, sc=8) 00:33:02.740 Write completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Read 
completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Write completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Write completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Write completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Write completed with error (sct=0, sc=8) 00:33:02.740 Write completed with error (sct=0, sc=8) 00:33:02.740 Write completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Write completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Write completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Write completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Write completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Write completed with error (sct=0, sc=8) 00:33:02.740 Write completed with error (sct=0, sc=8) 00:33:02.740 Write completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error 
(sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 starting I/O failed: -6 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Write completed with error (sct=0, sc=8) 00:33:02.740 starting I/O failed: -6 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.740 starting I/O failed: -6 00:33:02.740 Read completed with error (sct=0, sc=8) 00:33:02.741 Read completed with error (sct=0, sc=8) 00:33:02.741 Read completed with error (sct=0, sc=8) 00:33:02.741 Write completed with error (sct=0, sc=8) 00:33:02.741 starting I/O failed: -6 00:33:02.741 Read completed with error (sct=0, sc=8) 00:33:02.741 Read completed with error (sct=0, sc=8) 00:33:02.741 Read completed with error (sct=0, sc=8) 00:33:02.741 Write completed with error (sct=0, sc=8) 00:33:02.741 starting I/O failed: -6 00:33:02.741 Write completed with error (sct=0, sc=8) 00:33:02.741 Read completed with error (sct=0, sc=8) 00:33:02.741 Read completed with error (sct=0, sc=8) 00:33:02.741 Read completed with error (sct=0, sc=8) 00:33:02.741 starting I/O failed: -6 00:33:02.741 Read completed with error (sct=0, sc=8) 00:33:02.741 Read completed with error (sct=0, sc=8) 00:33:02.741 Read completed with error (sct=0, sc=8) 00:33:02.741 Read completed with error (sct=0, sc=8) 00:33:02.741 starting I/O failed: -6 00:33:02.741 Read completed with error (sct=0, sc=8) 00:33:02.741 Read completed with error (sct=0, sc=8) 00:33:02.741 Write completed with error (sct=0, sc=8) 00:33:02.741 Read completed with error (sct=0, sc=8) 00:33:02.741 starting I/O failed: -6 00:33:02.741 Read completed with error (sct=0, sc=8) 00:33:02.741 Read completed with error 
(sct=0, sc=8) 00:33:02.741 Read completed with error (sct=0, sc=8) 00:33:02.741 Write completed with error (sct=0, sc=8) 00:33:02.741 starting I/O failed: -6 00:33:02.741 Read completed with error (sct=0, sc=8) 00:33:02.741 Read completed with error (sct=0, sc=8) 00:33:02.741 Write completed with error (sct=0, sc=8) 00:33:02.741 Read completed with error (sct=0, sc=8) 00:33:02.741 starting I/O failed: -6 00:33:02.741 Read completed with error (sct=0, sc=8) 00:33:02.741 Write completed with error (sct=0, sc=8) 00:33:02.741 Read completed with error (sct=0, sc=8) 00:33:02.741 Write completed with error (sct=0, sc=8) 00:33:02.741 starting I/O failed: -6 00:33:02.741 [2024-12-05 14:04:45.043156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1a1800d4b0 is same with the state(6) to be set 00:33:03.676 [2024-12-05 14:04:46.011536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x192f9b0 is same with the state(6) to be set 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Write completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Write completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Write completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error 
(sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Write completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Write completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Write completed with error (sct=0, sc=8) 00:33:03.676 Write completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 [2024-12-05 14:04:46.043737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1a1800d7e0 is same with the state(6) to be set 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Write completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Write completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Write completed with error (sct=0, sc=8) 00:33:03.676 Write completed with error (sct=0, sc=8) 00:33:03.676 Write completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Write completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Write completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, 
sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Write completed with error (sct=0, sc=8) 00:33:03.676 Write completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Write completed with error (sct=0, sc=8) 00:33:03.676 [2024-12-05 14:04:46.043882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1a1800d020 is same with the state(6) to be set 00:33:03.676 Write completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Write completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Write completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Write completed with error (sct=0, sc=8) 00:33:03.676 Write completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Write completed with error (sct=0, sc=8) 00:33:03.676 Write completed with error (sct=0, sc=8) 00:33:03.676 Write completed with error (sct=0, sc=8) 00:33:03.676 Write completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Write completed with error (sct=0, sc=8) 
00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 [2024-12-05 14:04:46.044003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1a18000c40 is same with the state(6) to be set 00:33:03.676 Read completed with error (sct=0, sc=8) 00:33:03.676 Write completed with error (sct=0, sc=8) 00:33:03.676 Write completed with error (sct=0, sc=8) 00:33:03.677 Write completed with error (sct=0, sc=8) 00:33:03.677 Read completed with error (sct=0, sc=8) 00:33:03.677 Read completed with error (sct=0, sc=8) 00:33:03.677 Read completed with error (sct=0, sc=8) 00:33:03.677 Read completed with error (sct=0, sc=8) 00:33:03.677 Read completed with error (sct=0, sc=8) 00:33:03.677 Read completed with error (sct=0, sc=8) 00:33:03.677 Write completed with error (sct=0, sc=8) 00:33:03.677 Read completed with error (sct=0, sc=8) 00:33:03.677 Write completed with error (sct=0, sc=8) 00:33:03.677 Read completed with error (sct=0, sc=8) 00:33:03.677 Read completed with error (sct=0, sc=8) 00:33:03.677 [2024-12-05 14:04:46.046118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x192e680 is same with the state(6) to be set 00:33:03.677 Initializing NVMe Controllers 00:33:03.677 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:03.677 Controller IO queue size 128, less than required. 00:33:03.677 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:03.677 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:03.677 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:03.677 Initialization complete. Launching workers. 
00:33:03.677 ======================================================== 00:33:03.677 Latency(us) 00:33:03.677 Device Information : IOPS MiB/s Average min max 00:33:03.677 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 152.09 0.07 891468.28 235.11 1043825.88 00:33:03.677 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 165.01 0.08 1045748.24 362.20 1999253.29 00:33:03.677 ======================================================== 00:33:03.677 Total : 317.10 0.15 971751.90 235.11 1999253.29 00:33:03.677 00:33:03.677 [2024-12-05 14:04:46.046497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x192f9b0 (9): Bad file descriptor 00:33:03.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:33:03.677 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.677 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:33:03.677 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 846986 00:33:03.677 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:33:04.242 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:33:04.242 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 846986 00:33:04.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (846986) - No such process 00:33:04.242 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 846986 00:33:04.242 14:04:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:33:04.242 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 846986 00:33:04.242 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:33:04.242 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:04.242 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:33:04.242 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:04.242 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 846986 00:33:04.242 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:33:04.242 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:04.242 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:04.242 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:04.242 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:04.242 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.242 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:33:04.242 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.242 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:04.242 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.242 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:04.242 [2024-12-05 14:04:46.573220] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:04.242 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.242 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:04.242 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.242 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:04.242 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.242 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=847461 00:33:04.242 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:33:04.243 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:33:04.243 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 847461 00:33:04.243 14:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:04.243 [2024-12-05 14:04:46.656804] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:33:04.810 14:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:04.810 14:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 847461 00:33:04.810 14:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:05.069 14:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:05.069 14:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 847461 00:33:05.069 14:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:05.637 14:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:05.637 14:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 847461 00:33:05.637 14:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:06.202 14:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( 
delay++ > 20 )) 00:33:06.202 14:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 847461 00:33:06.202 14:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:06.768 14:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:06.768 14:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 847461 00:33:06.768 14:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:07.081 14:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:07.081 14:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 847461 00:33:07.081 14:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:07.340 Initializing NVMe Controllers 00:33:07.340 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:07.340 Controller IO queue size 128, less than required. 00:33:07.340 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:07.340 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:07.340 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:07.340 Initialization complete. Launching workers. 
00:33:07.340 ======================================================== 00:33:07.340 Latency(us) 00:33:07.340 Device Information : IOPS MiB/s Average min max 00:33:07.340 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002865.92 1000210.51 1042194.28 00:33:07.340 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004655.02 1000437.90 1042136.76 00:33:07.340 ======================================================== 00:33:07.340 Total : 256.00 0.12 1003760.47 1000210.51 1042194.28 00:33:07.340 00:33:07.599 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:07.599 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 847461 00:33:07.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (847461) - No such process 00:33:07.599 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 847461 00:33:07.599 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:33:07.599 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:33:07.599 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:07.599 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:33:07.599 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:07.599 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:33:07.599 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:33:07.599 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:07.599 rmmod nvme_tcp 00:33:07.599 rmmod nvme_fabrics 00:33:07.599 rmmod nvme_keyring 00:33:07.599 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:07.858 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:33:07.858 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:33:07.858 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 846768 ']' 00:33:07.858 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 846768 00:33:07.858 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 846768 ']' 00:33:07.858 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 846768 00:33:07.858 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:33:07.858 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:07.858 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 846768 00:33:07.858 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:07.858 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:07.858 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 846768' 00:33:07.858 killing process with pid 846768 00:33:07.858 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 846768 00:33:07.858 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 846768 00:33:07.858 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:07.858 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:07.858 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:07.858 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:33:07.858 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:33:07.858 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:07.858 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:33:07.858 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:07.858 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:07.858 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.858 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:07.858 14:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:10.392 00:33:10.392 real 0m16.087s 00:33:10.392 user 0m26.189s 00:33:10.392 sys 0m6.039s 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:10.392 ************************************ 00:33:10.392 END TEST nvmf_delete_subsystem 00:33:10.392 ************************************ 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:10.392 ************************************ 00:33:10.392 START TEST nvmf_host_management 00:33:10.392 ************************************ 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:33:10.392 * Looking for test storage... 
00:33:10.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:33:10.392 14:04:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:10.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.392 --rc genhtml_branch_coverage=1 00:33:10.392 --rc genhtml_function_coverage=1 00:33:10.392 --rc genhtml_legend=1 00:33:10.392 --rc geninfo_all_blocks=1 00:33:10.392 --rc geninfo_unexecuted_blocks=1 00:33:10.392 00:33:10.392 ' 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:10.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.392 --rc genhtml_branch_coverage=1 00:33:10.392 --rc genhtml_function_coverage=1 00:33:10.392 --rc genhtml_legend=1 00:33:10.392 --rc geninfo_all_blocks=1 00:33:10.392 --rc geninfo_unexecuted_blocks=1 00:33:10.392 00:33:10.392 ' 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:10.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.392 --rc genhtml_branch_coverage=1 00:33:10.392 --rc genhtml_function_coverage=1 00:33:10.392 --rc genhtml_legend=1 00:33:10.392 --rc geninfo_all_blocks=1 00:33:10.392 --rc geninfo_unexecuted_blocks=1 00:33:10.392 00:33:10.392 ' 00:33:10.392 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:10.392 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.392 --rc genhtml_branch_coverage=1 00:33:10.392 --rc genhtml_function_coverage=1 00:33:10.392 --rc genhtml_legend=1 00:33:10.392 --rc geninfo_all_blocks=1 00:33:10.392 --rc geninfo_unexecuted_blocks=1 00:33:10.392 00:33:10.393 ' 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:10.393 14:04:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.393 
14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:33:10.393 14:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:16.960 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:16.960 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:33:16.960 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:16.960 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:16.960 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:16.960 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:16.960 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:16.960 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:33:16.960 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:16.960 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:33:16.960 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:33:16.960 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:33:16.960 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:33:16.960 
14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:33:16.960 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:33:16.960 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:16.960 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:16.960 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:16.960 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:16.960 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:16.960 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:16.960 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:16.960 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:16.960 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:16.960 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:16.960 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:16.960 14:04:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:16.960 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:16.960 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:16.960 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:16.960 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:16.961 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:16.961 14:04:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:16.961 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:16.961 14:04:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:16.961 Found net devices under 0000:86:00.0: cvl_0_0 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:16.961 Found net devices under 0000:86:00.1: cvl_0_1 00:33:16.961 14:04:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:16.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:16.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:33:16.961 00:33:16.961 --- 10.0.0.2 ping statistics --- 00:33:16.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.961 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:16.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:16.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:33:16.961 00:33:16.961 --- 10.0.0.1 ping statistics --- 00:33:16.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.961 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
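The namespace wiring traced above (nvmf/common.sh steps @265-@291) condenses to a small pattern: create a namespace, move one side of the interface pair into it, address both ends, bring links up, then prove connectivity with one ICMP echo in each direction. A hypothetical condensed sketch follows; interface names and addresses are the ones substituted in the log, while the `run`/`DRY_RUN` wrapper is illustration only (the real harness runs the commands directly and needs root):

```shell
# Dry-run-capable command wrapper: with DRY_RUN set, just print the command.
run() { ${DRY_RUN:+echo} "$@"; }

setup_target_ns() {
    local ns=$1 host_if=$2 ns_if=$3          # e.g. cvl_0_0_ns_spdk cvl_0_1 cvl_0_0
    run ip netns add "$ns"
    run ip link set "$ns_if" netns "$ns"     # target-side NIC moves into the ns
    run ip addr add 10.0.0.1/24 dev "$host_if"
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$ns_if"
    run ip link set "$host_if" up
    run ip netns exec "$ns" ip link set "$ns_if" up
    run ip netns exec "$ns" ip link set lo up
    # Sanity check, as in the log: one ping each way across the /24
    run ping -c 1 10.0.0.2
    run ip netns exec "$ns" ping -c 1 10.0.0.1
}

DRY_RUN=1
setup_target_ns cvl_0_0_ns_spdk cvl_0_1 cvl_0_0
```

With the namespace in place, every target-side command in the rest of the log is simply prefixed with `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` array set at @266).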
00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=851594 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 851594 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 851594 ']' 00:33:16.961 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:16.962 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:33:16.962 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:16.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:16.962 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:16.962 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:16.962 [2024-12-05 14:04:58.679984] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:16.962 [2024-12-05 14:04:58.680876] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:33:16.962 [2024-12-05 14:04:58.680908] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:16.962 [2024-12-05 14:04:58.757484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:16.962 [2024-12-05 14:04:58.798034] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:16.962 [2024-12-05 14:04:58.798072] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:16.962 [2024-12-05 14:04:58.798080] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:16.962 [2024-12-05 14:04:58.798089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:16.962 [2024-12-05 14:04:58.798093] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:16.962 [2024-12-05 14:04:58.799730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:16.962 [2024-12-05 14:04:58.799845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:16.962 [2024-12-05 14:04:58.799933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:16.962 [2024-12-05 14:04:58.799932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:16.962 [2024-12-05 14:04:58.868294] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:16.962 [2024-12-05 14:04:58.868725] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:16.962 [2024-12-05 14:04:58.869157] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:16.962 [2024-12-05 14:04:58.869324] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:16.962 [2024-12-05 14:04:58.869387] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
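The `waitforlisten 851594` call above blocks until the freshly launched `nvmf_tgt` binds its RPC socket. A hypothetical reconstruction of that helper, matching the message and the `rpc_addr`/`max_retries` locals visible in the trace (the real autotest_common.sh version has more bookkeeping):

```shell
# Poll until the app binds its RPC UNIX domain socket, or give up once the
# process dies or retries run out. Sketch only; defaults mirror the log.
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
        [[ -S $rpc_addr ]] && return 0           # socket bound: ready for RPC
        sleep 0.1
    done
    return 1
}
```

Once this returns 0 the script proceeds to `timing_exit start_nvmf_tgt` and the first `rpc_cmd` calls, as seen immediately below.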
00:33:16.962 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:16.962 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:33:16.962 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:16.962 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:16.962 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:16.962 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:16.962 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:16.962 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.962 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:16.962 [2024-12-05 14:04:58.944837] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:16.962 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.962 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:33:16.962 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:16.962 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:16.962 14:04:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:33:16.962 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:33:16.962 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:33:16.962 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.962 14:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:16.962 Malloc0 00:33:16.962 [2024-12-05 14:04:59.033104] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:16.962 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.962 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:33:16.962 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:16.962 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:16.962 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=851712 00:33:16.962 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 851712 /var/tmp/bdevperf.sock 00:33:16.962 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 851712 ']' 00:33:16.962 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:33:16.962 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:33:16.962 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:33:16.962 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:16.962 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:16.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:16.962 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:33:16.962 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:16.962 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:33:16.962 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:16.962 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:16.962 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:16.962 { 00:33:16.962 "params": { 00:33:16.962 "name": "Nvme$subsystem", 00:33:16.962 "trtype": "$TEST_TRANSPORT", 00:33:16.962 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:16.962 "adrfam": "ipv4", 00:33:16.962 "trsvcid": "$NVMF_PORT", 00:33:16.962 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:33:16.962 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:16.962 "hdgst": ${hdgst:-false}, 00:33:16.962 "ddgst": ${ddgst:-false} 00:33:16.962 }, 00:33:16.962 "method": "bdev_nvme_attach_controller" 00:33:16.962 } 00:33:16.962 EOF 00:33:16.962 )") 00:33:16.962 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:33:16.962 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:33:16.962 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:33:16.962 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:16.962 "params": { 00:33:16.962 "name": "Nvme0", 00:33:16.962 "trtype": "tcp", 00:33:16.962 "traddr": "10.0.0.2", 00:33:16.962 "adrfam": "ipv4", 00:33:16.962 "trsvcid": "4420", 00:33:16.962 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:16.962 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:16.962 "hdgst": false, 00:33:16.962 "ddgst": false 00:33:16.962 }, 00:33:16.962 "method": "bdev_nvme_attach_controller" 00:33:16.962 }' 00:33:16.962 [2024-12-05 14:04:59.133373] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:33:16.962 [2024-12-05 14:04:59.133424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid851712 ] 00:33:16.962 [2024-12-05 14:04:59.211785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.962 [2024-12-05 14:04:59.252682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:16.962 Running I/O for 10 seconds... 
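The heredoc expansion above shows `gen_nvmf_target_json 0` building the `--json /dev/fd/63` config that bdevperf consumes: one `bdev_nvme_attach_controller` params object per subsystem, with the target IP, port, and NQNs substituted in. A minimal runnable sketch of that generator for a single subsystem, using the exact values printed in the trace (the real helper accumulates multiple subsystems into a `config` array and filters through `jq`):

```shell
# Emit the bdevperf attach-controller config for one subsystem.
# Values are the ones substituted in the log; hdgst/ddgst default to false.
gen_nvmf_target_json() {
    local subsystem=${1:-0}
    local NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420 TEST_TRANSPORT=tcp
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_nvmf_target_json 0
```

Feeding this through `/dev/fd/63` via process substitution is what the `--json /dev/fd/63` argument on the bdevperf command line above refers to.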
00:33:17.530 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:17.530 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:33:17.530 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:33:17.530 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.530 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:17.530 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.530 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:17.530 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:33:17.530 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:33:17.530 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:33:17.530 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:33:17.530 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:33:17.530 14:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:33:17.530 14:05:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:33:17.530 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:33:17.530 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:33:17.530 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.530 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:17.530 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.530 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1189 00:33:17.530 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1189 -ge 100 ']' 00:33:17.530 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:33:17.530 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:33:17.530 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:33:17.530 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:33:17.530 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.530 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:17.530 
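The `waitforio` sequence traced above (host_management.sh @52-@64) is a bounded poll: up to 10 iterations of `bdev_get_iostat -b Nvme0n1`, succeeding as soon as `num_read_ops` reaches 100, which proves I/O is actually flowing before the host-management fault is injected. A sketch of that loop with the RPC stubbed out so it runs anywhere (the stub's `get_iostat_json` and its canned 1189-read reply are illustration only; the real count comes from `rpc_cmd ... bdev_get_iostat` parsed with `jq -r '.bdevs[0].num_read_ops'`):

```shell
# Poll iostat up to 10 times; succeed once the bdev reports >= 100 reads.
waitforio() {
    local bdev=$1 ret=1 i read_io_count
    for (( i = 10; i != 0; i-- )); do
        # Real harness: rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b "$bdev" | jq ...
        read_io_count=$(get_iostat_json "$bdev" | sed -n 's/.*"num_read_ops": \([0-9]*\).*/\1/p')
        if [ "${read_io_count:-0}" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

# Hypothetical stand-in for the RPC; 1189 matches the read_io_count in the log.
get_iostat_json() { echo '{"bdevs": [{"name": "'"$1"'", "num_read_ops": 1189}]}'; }

waitforio Nvme0n1
```

With `read_io_count=1189 -ge 100` the loop breaks on the first pass, which is why the trace above shows `ret=0` and `break` immediately after the iostat call.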
[2024-12-05 14:05:00.044972] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1930 is same with the state(6) to be set 00:33:17.530 [2024-12-05 14:05:00.044992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:17.531 [2024-12-05 14:05:00.045014] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1930 is same with the state(6) to be set 00:33:17.531 [2024-12-05 14:05:00.045028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.045032] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1930 is same with the state(6) to be set 00:33:17.531 [2024-12-05 14:05:00.045041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1930 is same with the state(6) to be set 00:33:17.531 [2024-12-05 14:05:00.045041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:17.531 [2024-12-05 14:05:00.045050] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1930 is same with the state(6) to be set 00:33:17.531 [2024-12-05 14:05:00.045053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.045059] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1930 is same with the state(6) to be set 00:33:17.531 [2024-12-05 14:05:00.045064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:17.531 [2024-12-05 14:05:00.045067] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1930 is same with the state(6) to 
be set 00:33:17.531 [2024-12-05 14:05:00.045073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.045075] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1930 is same with the state(6) to be set 00:33:17.531 [2024-12-05 14:05:00.045083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:17.531 [2024-12-05 14:05:00.045084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1930 is same with the state(6) to be set 00:33:17.531 [2024-12-05 14:05:00.045094] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1930 is same with the state(6) to be set 00:33:17.531 [2024-12-05 14:05:00.045094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.045106] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1930 is same with the state(6) to be set 00:33:17.531 [2024-12-05 14:05:00.045107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182d510 is same with the state(6) to be set 00:33:17.531 [2024-12-05 14:05:00.045114] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1930 is same with the state(6) to be set 00:33:17.531 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.531 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:33:17.531 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:17.531 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:17.531 [2024-12-05 14:05:00.054193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.531 [2024-12-05 14:05:00.054217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.054233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.531 [2024-12-05 14:05:00.054240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.054249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.531 [2024-12-05 14:05:00.054263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.054271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.531 [2024-12-05 14:05:00.054278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.054287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.531 [2024-12-05 14:05:00.054293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.054302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.531 [2024-12-05 14:05:00.054308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.054316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.531 [2024-12-05 14:05:00.054323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.054331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.531 [2024-12-05 14:05:00.054337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.054346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.531 [2024-12-05 14:05:00.054353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.054361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.531 [2024-12-05 14:05:00.054373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.054382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.531 [2024-12-05 14:05:00.054389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.054398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.531 [2024-12-05 14:05:00.054404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.054413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.531 [2024-12-05 14:05:00.054419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.054427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.531 [2024-12-05 14:05:00.054434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.054442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.531 [2024-12-05 14:05:00.054448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.054458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.531 [2024-12-05 14:05:00.054465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.054473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.531 [2024-12-05 
14:05:00.054479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.054487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.531 [2024-12-05 14:05:00.054494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.054502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.531 [2024-12-05 14:05:00.054509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.054517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.531 [2024-12-05 14:05:00.054524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.054532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.531 [2024-12-05 14:05:00.054539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.054547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.531 [2024-12-05 14:05:00.054553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.054562] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.531 [2024-12-05 14:05:00.054568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.054577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.531 [2024-12-05 14:05:00.054584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.054592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.531 [2024-12-05 14:05:00.054599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.054607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.531 [2024-12-05 14:05:00.054613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.054621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.531 [2024-12-05 14:05:00.054628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.531 [2024-12-05 14:05:00.054636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.531 [2024-12-05 14:05:00.054644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.054653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.054659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.054667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.054674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.054682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.054689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.054697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.054703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.054711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.054717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.054726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.054733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.054740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.054747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.054756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.054763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.054771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.054778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.054786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.054793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.054802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.054808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 
[2024-12-05 14:05:00.054817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.054823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.054833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.054839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.054848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.054854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.054862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.054869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.054877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.054884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.054892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.054899] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.054907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.054914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.054922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.054928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.054936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.054944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.054952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.054959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.054966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.054973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.054981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.054988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.054997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.055003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.055011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.055019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.055028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.055034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.055042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.055049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.055057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.055064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.055072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.055078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.055086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.055093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.055101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.055108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.055116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.055122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.055131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.055137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.055145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 
14:05:00.055153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.055161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.055168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.055177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.532 [2024-12-05 14:05:00.055183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.532 [2024-12-05 14:05:00.055265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182d510 (9): Bad file descriptor 00:33:17.532 [2024-12-05 14:05:00.056131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:17.532 task offset: 32768 on job bdev=Nvme0n1 fails 00:33:17.532 00:33:17.532 Latency(us) 00:33:17.532 [2024-12-05T13:05:00.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:17.532 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:17.532 Job: Nvme0n1 ended in about 0.65 seconds with error 00:33:17.532 Verification LBA range: start 0x0 length 0x400 00:33:17.532 Nvme0n1 : 0.65 1979.52 123.72 98.98 0.00 30192.50 1419.95 26464.06 00:33:17.532 [2024-12-05T13:05:00.120Z] =================================================================================================================== 00:33:17.533 [2024-12-05T13:05:00.120Z] Total : 1979.52 123.72 98.98 0.00 30192.50 1419.95 26464.06 00:33:17.533 [2024-12-05 14:05:00.058520] app.c:1064:spdk_app_stop: *WARNING*: 
spdk_app_stop'd on non-zero 00:33:17.533 [2024-12-05 14:05:00.061172] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:33:17.533 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.533 14:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:33:18.909 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 851712 00:33:18.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (851712) - No such process 00:33:18.909 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:33:18.909 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:33:18.909 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:33:18.909 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:33:18.909 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:33:18.909 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:33:18.909 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:18.909 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # 
config+=("$(cat <<-EOF 00:33:18.909 { 00:33:18.909 "params": { 00:33:18.909 "name": "Nvme$subsystem", 00:33:18.909 "trtype": "$TEST_TRANSPORT", 00:33:18.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:18.909 "adrfam": "ipv4", 00:33:18.909 "trsvcid": "$NVMF_PORT", 00:33:18.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:18.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:18.909 "hdgst": ${hdgst:-false}, 00:33:18.909 "ddgst": ${ddgst:-false} 00:33:18.909 }, 00:33:18.909 "method": "bdev_nvme_attach_controller" 00:33:18.909 } 00:33:18.909 EOF 00:33:18.909 )") 00:33:18.909 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:33:18.909 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:33:18.909 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:33:18.909 14:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:18.909 "params": { 00:33:18.909 "name": "Nvme0", 00:33:18.909 "trtype": "tcp", 00:33:18.909 "traddr": "10.0.0.2", 00:33:18.909 "adrfam": "ipv4", 00:33:18.909 "trsvcid": "4420", 00:33:18.909 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:18.909 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:18.909 "hdgst": false, 00:33:18.909 "ddgst": false 00:33:18.909 }, 00:33:18.909 "method": "bdev_nvme_attach_controller" 00:33:18.909 }' 00:33:18.909 [2024-12-05 14:05:01.116854] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:33:18.909 [2024-12-05 14:05:01.116899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid851961 ] 00:33:18.909 [2024-12-05 14:05:01.193460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.909 [2024-12-05 14:05:01.232437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:19.167 Running I/O for 1 seconds... 00:33:20.100 1920.00 IOPS, 120.00 MiB/s 00:33:20.100 Latency(us) 00:33:20.100 [2024-12-05T13:05:02.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:20.100 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:20.100 Verification LBA range: start 0x0 length 0x400 00:33:20.100 Nvme0n1 : 1.01 1968.08 123.01 0.00 0.00 31991.37 5367.71 26588.89 00:33:20.100 [2024-12-05T13:05:02.687Z] =================================================================================================================== 00:33:20.100 [2024-12-05T13:05:02.687Z] Total : 1968.08 123.01 0.00 0.00 31991.37 5367.71 26588.89 00:33:20.359 14:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:33:20.359 14:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:33:20.359 14:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:33:20.359 14:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:33:20.359 14:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:33:20.359 14:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:20.359 14:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:33:20.359 14:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:20.359 14:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:33:20.359 14:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:20.359 14:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:20.359 rmmod nvme_tcp 00:33:20.359 rmmod nvme_fabrics 00:33:20.359 rmmod nvme_keyring 00:33:20.359 14:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:20.359 14:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:33:20.359 14:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:33:20.359 14:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 851594 ']' 00:33:20.359 14:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 851594 00:33:20.359 14:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 851594 ']' 00:33:20.359 14:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 851594 00:33:20.359 14:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:33:20.359 14:05:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:20.359 14:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 851594 00:33:20.359 14:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:20.359 14:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:20.359 14:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 851594' 00:33:20.359 killing process with pid 851594 00:33:20.359 14:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 851594 00:33:20.359 14:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 851594 00:33:20.619 [2024-12-05 14:05:03.009731] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:33:20.619 14:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:20.619 14:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:20.619 14:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:20.619 14:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:33:20.620 14:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:33:20.620 14:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:20.620 14:05:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:33:20.620 14:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:20.620 14:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:20.620 14:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:20.620 14:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:20.620 14:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.526 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:22.526 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:33:22.526 00:33:22.526 real 0m12.561s 00:33:22.526 user 0m19.074s 00:33:22.526 sys 0m6.500s 00:33:22.526 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:22.526 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:22.526 ************************************ 00:33:22.526 END TEST nvmf_host_management 00:33:22.526 ************************************ 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:22.785 
14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:22.785 ************************************ 00:33:22.785 START TEST nvmf_lvol 00:33:22.785 ************************************ 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:33:22.785 * Looking for test storage... 00:33:22.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:33:22.785 14:05:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:22.785 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:33:22.786 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:33:22.786 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:22.786 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:22.786 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:33:22.786 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:22.786 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:22.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.786 --rc genhtml_branch_coverage=1 00:33:22.786 --rc 
genhtml_function_coverage=1 00:33:22.786 --rc genhtml_legend=1 00:33:22.786 --rc geninfo_all_blocks=1 00:33:22.786 --rc geninfo_unexecuted_blocks=1 00:33:22.786 00:33:22.786 ' 00:33:22.786 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:22.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.786 --rc genhtml_branch_coverage=1 00:33:22.786 --rc genhtml_function_coverage=1 00:33:22.786 --rc genhtml_legend=1 00:33:22.786 --rc geninfo_all_blocks=1 00:33:22.786 --rc geninfo_unexecuted_blocks=1 00:33:22.786 00:33:22.786 ' 00:33:22.786 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:22.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.786 --rc genhtml_branch_coverage=1 00:33:22.786 --rc genhtml_function_coverage=1 00:33:22.786 --rc genhtml_legend=1 00:33:22.786 --rc geninfo_all_blocks=1 00:33:22.786 --rc geninfo_unexecuted_blocks=1 00:33:22.786 00:33:22.786 ' 00:33:22.786 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:22.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.786 --rc genhtml_branch_coverage=1 00:33:22.786 --rc genhtml_function_coverage=1 00:33:22.786 --rc genhtml_legend=1 00:33:22.786 --rc geninfo_all_blocks=1 00:33:22.786 --rc geninfo_unexecuted_blocks=1 00:33:22.786 00:33:22.786 ' 00:33:22.786 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:22.786 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:33:22.786 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:22.786 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:33:22.786 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:22.786 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:22.786 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:22.786 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:22.786 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:22.786 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:22.786 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:22.786 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.045 14:05:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:23.045 14:05:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:33:23.045 14:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:29.613 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:29.613 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:29.613 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:29.613 14:05:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:29.614 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:29.614 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:29.614 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:29.614 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:29.614 Found net devices under 0000:86:00.0: cvl_0_0 00:33:29.614 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:29.614 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:29.614 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:29.614 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:29.614 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:29.614 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:29.614 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:29.614 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:29.614 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:29.614 Found net devices under 0000:86:00.1: cvl_0_1 00:33:29.614 14:05:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:29.614 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:29.614 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:33:29.614 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:29.614 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:29.614 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:29.614 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:29.614 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:29.614 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:29.614 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:29.614 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:29.614 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:29.614 14:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:29.614 14:05:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:29.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:29.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:33:29.614 00:33:29.614 --- 10.0.0.2 ping statistics --- 00:33:29.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:29.614 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:29.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:29.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:33:29.614 00:33:29.614 --- 10.0.0.1 ping statistics --- 00:33:29.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:29.614 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:33:29.614 
14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=855717 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 855717 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 855717 ']' 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:29.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:29.614 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:29.614 [2024-12-05 14:05:11.332773] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:33:29.614 [2024-12-05 14:05:11.333646] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:33:29.614 [2024-12-05 14:05:11.333677] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:29.614 [2024-12-05 14:05:11.397776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:29.614 [2024-12-05 14:05:11.441028] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:29.614 [2024-12-05 14:05:11.441065] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:29.615 [2024-12-05 14:05:11.441073] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:29.615 [2024-12-05 14:05:11.441080] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:29.615 [2024-12-05 14:05:11.441085] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:29.615 [2024-12-05 14:05:11.445386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:29.615 [2024-12-05 14:05:11.445424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:29.615 [2024-12-05 14:05:11.445425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:29.615 [2024-12-05 14:05:11.513862] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:29.615 [2024-12-05 14:05:11.513967] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:29.615 [2024-12-05 14:05:11.514471] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:33:29.615 [2024-12-05 14:05:11.514626] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:29.615 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:29.615 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:33:29.615 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:29.615 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:29.615 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:29.615 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:29.615 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:29.615 [2024-12-05 14:05:11.754245] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:29.615 14:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:29.615 14:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:33:29.615 14:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:29.872 14:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:33:29.872 14:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:33:29.872 14:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:33:30.130 14:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=61da889d-7a68-4739-8c3d-04dd423ba069 00:33:30.130 14:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 61da889d-7a68-4739-8c3d-04dd423ba069 lvol 20 00:33:30.388 14:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=2dfa1413-3edb-43a5-83a4-d4a16763d9fd 00:33:30.388 14:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:30.646 14:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2dfa1413-3edb-43a5-83a4-d4a16763d9fd 00:33:30.646 14:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:30.904 [2024-12-05 14:05:13.366116] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:30.904 14:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:31.162 
14:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:33:31.162 14:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=856201 00:33:31.162 14:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:33:32.094 14:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 2dfa1413-3edb-43a5-83a4-d4a16763d9fd MY_SNAPSHOT 00:33:32.352 14:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=949f29f4-39e6-42a3-a408-d70e1ceac022 00:33:32.352 14:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 2dfa1413-3edb-43a5-83a4-d4a16763d9fd 30 00:33:32.610 14:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 949f29f4-39e6-42a3-a408-d70e1ceac022 MY_CLONE 00:33:32.867 14:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0c88807b-2884-41eb-8f2b-2e6ac272d568 00:33:32.867 14:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0c88807b-2884-41eb-8f2b-2e6ac272d568 00:33:33.433 14:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 856201 00:33:41.662 Initializing NVMe Controllers 00:33:41.662 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:33:41.662 
Controller IO queue size 128, less than required. 00:33:41.662 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:41.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:33:41.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:33:41.662 Initialization complete. Launching workers. 00:33:41.662 ======================================================== 00:33:41.662 Latency(us) 00:33:41.662 Device Information : IOPS MiB/s Average min max 00:33:41.662 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12769.27 49.88 10026.13 1508.11 92215.58 00:33:41.662 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12594.87 49.20 10166.30 3812.49 47897.71 00:33:41.662 ======================================================== 00:33:41.662 Total : 25364.14 99.08 10095.74 1508.11 92215.58 00:33:41.662 00:33:41.662 14:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:41.662 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2dfa1413-3edb-43a5-83a4-d4a16763d9fd 00:33:41.920 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 61da889d-7a68-4739-8c3d-04dd423ba069 00:33:42.179 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:33:42.179 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:33:42.179 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:33:42.179 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:42.179 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:33:42.179 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:42.179 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:33:42.179 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:42.179 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:42.179 rmmod nvme_tcp 00:33:42.179 rmmod nvme_fabrics 00:33:42.179 rmmod nvme_keyring 00:33:42.179 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:42.179 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:33:42.179 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:33:42.179 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 855717 ']' 00:33:42.179 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 855717 00:33:42.179 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 855717 ']' 00:33:42.179 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 855717 00:33:42.179 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:33:42.179 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:42.179 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 855717 00:33:42.179 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:42.179 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:42.179 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 855717' 00:33:42.179 killing process with pid 855717 00:33:42.179 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 855717 00:33:42.179 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 855717 00:33:42.438 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:42.438 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:42.438 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:42.438 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:33:42.438 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:33:42.438 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:42.438 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:33:42.438 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:42.438 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:42.438 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:42.438 14:05:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:42.438 14:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:44.342 14:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:44.342 00:33:44.342 real 0m21.730s 00:33:44.342 user 0m55.445s 00:33:44.342 sys 0m9.699s 00:33:44.342 14:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:44.342 14:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:44.342 ************************************ 00:33:44.342 END TEST nvmf_lvol 00:33:44.342 ************************************ 00:33:44.600 14:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:33:44.600 14:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:44.600 14:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:44.600 14:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:44.600 ************************************ 00:33:44.600 START TEST nvmf_lvs_grow 00:33:44.600 ************************************ 00:33:44.600 14:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:33:44.600 * Looking for test storage... 
00:33:44.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:44.600 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:44.600 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:33:44.600 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:44.600 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:44.600 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:44.600 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:44.600 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:44.601 14:05:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:44.601 14:05:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:44.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.601 --rc genhtml_branch_coverage=1 00:33:44.601 --rc genhtml_function_coverage=1 00:33:44.601 --rc genhtml_legend=1 00:33:44.601 --rc geninfo_all_blocks=1 00:33:44.601 --rc geninfo_unexecuted_blocks=1 00:33:44.601 00:33:44.601 ' 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:44.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.601 --rc genhtml_branch_coverage=1 00:33:44.601 --rc genhtml_function_coverage=1 00:33:44.601 --rc genhtml_legend=1 00:33:44.601 --rc geninfo_all_blocks=1 00:33:44.601 --rc geninfo_unexecuted_blocks=1 00:33:44.601 00:33:44.601 ' 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:44.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.601 --rc genhtml_branch_coverage=1 00:33:44.601 --rc genhtml_function_coverage=1 00:33:44.601 --rc genhtml_legend=1 00:33:44.601 --rc geninfo_all_blocks=1 00:33:44.601 --rc geninfo_unexecuted_blocks=1 00:33:44.601 00:33:44.601 ' 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:44.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.601 --rc genhtml_branch_coverage=1 00:33:44.601 --rc genhtml_function_coverage=1 00:33:44.601 --rc genhtml_legend=1 00:33:44.601 --rc geninfo_all_blocks=1 00:33:44.601 --rc 
geninfo_unexecuted_blocks=1 00:33:44.601 00:33:44.601 ' 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:44.601 14:05:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.601 14:05:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:44.601 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:44.860 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:44.860 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:44.860 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:44.860 14:05:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:44.860 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:44.860 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:33:44.860 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:44.860 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:44.860 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:44.860 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:44.860 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:44.860 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:44.860 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:44.860 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:44.860 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:44.860 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:44.860 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:33:44.860 14:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:51.428 
14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:51.428 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:33:51.428 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:51.428 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:51.428 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:51.428 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:51.428 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:51.428 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:33:51.428 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:51.428 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:33:51.428 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:33:51.428 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:33:51.428 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:33:51.428 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:33:51.428 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:33:51.428 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:51.428 14:05:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:51.428 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:51.428 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:51.428 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:51.428 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:51.428 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:51.428 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:51.429 14:05:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:51.429 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:51.429 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:51.429 Found net devices under 0000:86:00.0: cvl_0_0 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:51.429 14:05:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:51.429 Found net devices under 0000:86:00.1: cvl_0_1 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:51.429 
14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:51.429 14:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:51.429 14:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:51.429 14:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:51.429 14:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:51.429 14:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:51.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:51.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:33:51.429 00:33:51.429 --- 10.0.0.2 ping statistics --- 00:33:51.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:51.429 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:33:51.429 14:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:51.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:51.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:33:51.429 00:33:51.429 --- 10.0.0.1 ping statistics --- 00:33:51.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:51.429 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:33:51.429 14:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:51.429 14:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:33:51.429 14:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:51.429 14:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:51.429 14:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:51.429 14:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:51.429 14:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:51.429 14:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:51.429 14:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:51.429 14:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:33:51.429 14:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:51.429 14:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:51.429 14:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:51.429 14:05:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=861386 00:33:51.429 14:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 861386 00:33:51.430 14:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:51.430 14:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 861386 ']' 00:33:51.430 14:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:51.430 14:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:51.430 14:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:51.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:51.430 14:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:51.430 14:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:51.430 [2024-12-05 14:05:33.171523] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:51.430 [2024-12-05 14:05:33.172430] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:33:51.430 [2024-12-05 14:05:33.172461] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:51.430 [2024-12-05 14:05:33.261519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:51.430 [2024-12-05 14:05:33.301962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:51.430 [2024-12-05 14:05:33.302003] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:51.430 [2024-12-05 14:05:33.302010] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:51.430 [2024-12-05 14:05:33.302016] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:51.430 [2024-12-05 14:05:33.302021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:51.430 [2024-12-05 14:05:33.302583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:51.430 [2024-12-05 14:05:33.370725] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:51.430 [2024-12-05 14:05:33.370929] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:51.688 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:51.688 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:33:51.688 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:51.688 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:51.688 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:51.688 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:51.688 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:51.688 [2024-12-05 14:05:34.227283] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:51.688 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:33:51.688 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:51.688 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:51.688 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:51.945 ************************************ 00:33:51.945 START TEST lvs_grow_clean 00:33:51.945 ************************************ 00:33:51.945 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:33:51.945 14:05:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:33:51.945 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:33:51.945 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:33:51.945 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:33:51.945 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:33:51.945 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:33:51.945 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:51.945 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:51.945 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:51.945 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:33:51.945 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:33:52.203 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=13e889d4-d2dd-496e-85a7-daa9effb3e78 00:33:52.203 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:52.203 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13e889d4-d2dd-496e-85a7-daa9effb3e78 00:33:52.460 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:52.460 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:52.460 14:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 13e889d4-d2dd-496e-85a7-daa9effb3e78 lvol 150 00:33:52.718 14:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=13de37c7-0b35-4208-9eb1-7fbc0a4a856a 00:33:52.718 14:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:52.718 14:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:52.718 [2024-12-05 14:05:35.274985] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:52.718 [2024-12-05 14:05:35.275117] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:52.718 true 00:33:52.718 14:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13e889d4-d2dd-496e-85a7-daa9effb3e78 00:33:52.718 14:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:52.975 14:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:52.975 14:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:53.233 14:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 13de37c7-0b35-4208-9eb1-7fbc0a4a856a 00:33:53.491 14:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:53.491 [2024-12-05 14:05:36.031510] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:53.491 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:53.748 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=862014 00:33:53.748 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:33:53.748 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:53.748 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 862014 /var/tmp/bdevperf.sock 00:33:53.749 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 862014 ']' 00:33:53.749 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:53.749 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:53.749 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:53.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:33:53.749 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:53.749 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:33:53.749 [2024-12-05 14:05:36.292034] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:33:53.749 [2024-12-05 14:05:36.292085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid862014 ] 00:33:54.007 [2024-12-05 14:05:36.366683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:54.007 [2024-12-05 14:05:36.410667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:54.007 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:54.007 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:33:54.007 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:33:54.266 Nvme0n1 00:33:54.266 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:33:54.525 [ 00:33:54.525 { 00:33:54.525 "name": "Nvme0n1", 00:33:54.525 "aliases": [ 00:33:54.525 "13de37c7-0b35-4208-9eb1-7fbc0a4a856a" 00:33:54.525 ], 00:33:54.525 "product_name": "NVMe disk", 00:33:54.525 
"block_size": 4096, 00:33:54.525 "num_blocks": 38912, 00:33:54.525 "uuid": "13de37c7-0b35-4208-9eb1-7fbc0a4a856a", 00:33:54.525 "numa_id": 1, 00:33:54.525 "assigned_rate_limits": { 00:33:54.525 "rw_ios_per_sec": 0, 00:33:54.525 "rw_mbytes_per_sec": 0, 00:33:54.525 "r_mbytes_per_sec": 0, 00:33:54.525 "w_mbytes_per_sec": 0 00:33:54.525 }, 00:33:54.525 "claimed": false, 00:33:54.525 "zoned": false, 00:33:54.525 "supported_io_types": { 00:33:54.525 "read": true, 00:33:54.525 "write": true, 00:33:54.525 "unmap": true, 00:33:54.525 "flush": true, 00:33:54.525 "reset": true, 00:33:54.525 "nvme_admin": true, 00:33:54.525 "nvme_io": true, 00:33:54.525 "nvme_io_md": false, 00:33:54.525 "write_zeroes": true, 00:33:54.525 "zcopy": false, 00:33:54.525 "get_zone_info": false, 00:33:54.525 "zone_management": false, 00:33:54.526 "zone_append": false, 00:33:54.526 "compare": true, 00:33:54.526 "compare_and_write": true, 00:33:54.526 "abort": true, 00:33:54.526 "seek_hole": false, 00:33:54.526 "seek_data": false, 00:33:54.526 "copy": true, 00:33:54.526 "nvme_iov_md": false 00:33:54.526 }, 00:33:54.526 "memory_domains": [ 00:33:54.526 { 00:33:54.526 "dma_device_id": "system", 00:33:54.526 "dma_device_type": 1 00:33:54.526 } 00:33:54.526 ], 00:33:54.526 "driver_specific": { 00:33:54.526 "nvme": [ 00:33:54.526 { 00:33:54.526 "trid": { 00:33:54.526 "trtype": "TCP", 00:33:54.526 "adrfam": "IPv4", 00:33:54.526 "traddr": "10.0.0.2", 00:33:54.526 "trsvcid": "4420", 00:33:54.526 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:54.526 }, 00:33:54.526 "ctrlr_data": { 00:33:54.526 "cntlid": 1, 00:33:54.526 "vendor_id": "0x8086", 00:33:54.526 "model_number": "SPDK bdev Controller", 00:33:54.526 "serial_number": "SPDK0", 00:33:54.526 "firmware_revision": "25.01", 00:33:54.526 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:54.526 "oacs": { 00:33:54.526 "security": 0, 00:33:54.526 "format": 0, 00:33:54.526 "firmware": 0, 00:33:54.526 "ns_manage": 0 00:33:54.526 }, 00:33:54.526 "multi_ctrlr": true, 
00:33:54.526 "ana_reporting": false 00:33:54.526 }, 00:33:54.526 "vs": { 00:33:54.526 "nvme_version": "1.3" 00:33:54.526 }, 00:33:54.526 "ns_data": { 00:33:54.526 "id": 1, 00:33:54.526 "can_share": true 00:33:54.526 } 00:33:54.526 } 00:33:54.526 ], 00:33:54.526 "mp_policy": "active_passive" 00:33:54.526 } 00:33:54.526 } 00:33:54.526 ] 00:33:54.526 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=862070 00:33:54.526 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:54.526 14:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:33:54.526 Running I/O for 10 seconds... 00:33:55.902 Latency(us) 00:33:55.902 [2024-12-05T13:05:38.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:55.902 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:55.902 Nvme0n1 : 1.00 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:33:55.902 [2024-12-05T13:05:38.489Z] =================================================================================================================== 00:33:55.902 [2024-12-05T13:05:38.489Z] Total : 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:33:55.902 00:33:56.470 14:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 13e889d4-d2dd-496e-85a7-daa9effb3e78 00:33:56.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:56.729 Nvme0n1 : 2.00 23431.50 91.53 0.00 0.00 0.00 0.00 0.00 00:33:56.729 [2024-12-05T13:05:39.316Z] 
=================================================================================================================== 00:33:56.729 [2024-12-05T13:05:39.316Z] Total : 23431.50 91.53 0.00 0.00 0.00 0.00 0.00 00:33:56.729 00:33:56.729 true 00:33:56.729 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13e889d4-d2dd-496e-85a7-daa9effb3e78 00:33:56.729 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:33:56.986 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:33:56.986 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:33:56.986 14:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 862070 00:33:57.552 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:57.552 Nvme0n1 : 3.00 23495.00 91.78 0.00 0.00 0.00 0.00 0.00 00:33:57.552 [2024-12-05T13:05:40.139Z] =================================================================================================================== 00:33:57.552 [2024-12-05T13:05:40.139Z] Total : 23495.00 91.78 0.00 0.00 0.00 0.00 0.00 00:33:57.552 00:33:58.488 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:58.488 Nvme0n1 : 4.00 23590.25 92.15 0.00 0.00 0.00 0.00 0.00 00:33:58.488 [2024-12-05T13:05:41.075Z] =================================================================================================================== 00:33:58.488 [2024-12-05T13:05:41.075Z] Total : 23590.25 92.15 0.00 0.00 0.00 0.00 0.00 00:33:58.488 00:33:59.865 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:33:59.865 Nvme0n1 : 5.00 23672.80 92.47 0.00 0.00 0.00 0.00 0.00 00:33:59.865 [2024-12-05T13:05:42.452Z] =================================================================================================================== 00:33:59.865 [2024-12-05T13:05:42.452Z] Total : 23672.80 92.47 0.00 0.00 0.00 0.00 0.00 00:33:59.865 00:34:00.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:00.801 Nvme0n1 : 6.00 23738.50 92.73 0.00 0.00 0.00 0.00 0.00 00:34:00.801 [2024-12-05T13:05:43.388Z] =================================================================================================================== 00:34:00.801 [2024-12-05T13:05:43.388Z] Total : 23738.50 92.73 0.00 0.00 0.00 0.00 0.00 00:34:00.801 00:34:01.737 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:01.737 Nvme0n1 : 7.00 23774.14 92.87 0.00 0.00 0.00 0.00 0.00 00:34:01.737 [2024-12-05T13:05:44.324Z] =================================================================================================================== 00:34:01.737 [2024-12-05T13:05:44.324Z] Total : 23774.14 92.87 0.00 0.00 0.00 0.00 0.00 00:34:01.737 00:34:02.678 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:02.678 Nvme0n1 : 8.00 23810.75 93.01 0.00 0.00 0.00 0.00 0.00 00:34:02.678 [2024-12-05T13:05:45.265Z] =================================================================================================================== 00:34:02.678 [2024-12-05T13:05:45.265Z] Total : 23810.75 93.01 0.00 0.00 0.00 0.00 0.00 00:34:02.678 00:34:03.611 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:03.611 Nvme0n1 : 9.00 23842.89 93.14 0.00 0.00 0.00 0.00 0.00 00:34:03.611 [2024-12-05T13:05:46.198Z] =================================================================================================================== 00:34:03.611 [2024-12-05T13:05:46.198Z] Total : 23842.89 93.14 0.00 0.00 0.00 0.00 0.00 00:34:03.611 
00:34:04.548 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:04.548 Nvme0n1 : 10.00 23856.00 93.19 0.00 0.00 0.00 0.00 0.00 00:34:04.548 [2024-12-05T13:05:47.135Z] =================================================================================================================== 00:34:04.548 [2024-12-05T13:05:47.135Z] Total : 23856.00 93.19 0.00 0.00 0.00 0.00 0.00 00:34:04.548 00:34:04.548 00:34:04.548 Latency(us) 00:34:04.548 [2024-12-05T13:05:47.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:04.548 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:04.548 Nvme0n1 : 10.00 23855.40 93.19 0.00 0.00 5362.44 2683.86 26713.72 00:34:04.548 [2024-12-05T13:05:47.136Z] =================================================================================================================== 00:34:04.549 [2024-12-05T13:05:47.136Z] Total : 23855.40 93.19 0.00 0.00 5362.44 2683.86 26713.72 00:34:04.549 { 00:34:04.549 "results": [ 00:34:04.549 { 00:34:04.549 "job": "Nvme0n1", 00:34:04.549 "core_mask": "0x2", 00:34:04.549 "workload": "randwrite", 00:34:04.549 "status": "finished", 00:34:04.549 "queue_depth": 128, 00:34:04.549 "io_size": 4096, 00:34:04.549 "runtime": 10.002933, 00:34:04.549 "iops": 23855.403210238437, 00:34:04.549 "mibps": 93.1851687899939, 00:34:04.549 "io_failed": 0, 00:34:04.549 "io_timeout": 0, 00:34:04.549 "avg_latency_us": 5362.442791113495, 00:34:04.549 "min_latency_us": 2683.8552380952383, 00:34:04.549 "max_latency_us": 26713.721904761904 00:34:04.549 } 00:34:04.549 ], 00:34:04.549 "core_count": 1 00:34:04.549 } 00:34:04.549 14:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 862014 00:34:04.549 14:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 862014 ']' 00:34:04.549 14:05:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 862014 00:34:04.549 14:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:34:04.549 14:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:04.549 14:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 862014 00:34:04.808 14:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:04.808 14:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:04.808 14:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 862014' 00:34:04.808 killing process with pid 862014 00:34:04.808 14:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 862014 00:34:04.808 Received shutdown signal, test time was about 10.000000 seconds 00:34:04.808 00:34:04.808 Latency(us) 00:34:04.808 [2024-12-05T13:05:47.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:04.808 [2024-12-05T13:05:47.395Z] =================================================================================================================== 00:34:04.808 [2024-12-05T13:05:47.395Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:04.808 14:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 862014 00:34:04.808 14:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:05.067 14:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:05.326 14:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13e889d4-d2dd-496e-85a7-daa9effb3e78 00:34:05.326 14:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:34:05.326 14:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:34:05.326 14:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:34:05.326 14:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:05.585 [2024-12-05 14:05:48.067080] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:34:05.585 14:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13e889d4-d2dd-496e-85a7-daa9effb3e78 00:34:05.585 14:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:34:05.585 14:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13e889d4-d2dd-496e-85a7-daa9effb3e78 00:34:05.585 14:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:05.585 14:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:05.585 14:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:05.585 14:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:05.585 14:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:05.585 14:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:05.585 14:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:05.585 14:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:34:05.585 14:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13e889d4-d2dd-496e-85a7-daa9effb3e78 00:34:05.844 request: 00:34:05.844 { 00:34:05.844 "uuid": "13e889d4-d2dd-496e-85a7-daa9effb3e78", 00:34:05.844 "method": 
"bdev_lvol_get_lvstores", 00:34:05.844 "req_id": 1 00:34:05.844 } 00:34:05.844 Got JSON-RPC error response 00:34:05.844 response: 00:34:05.844 { 00:34:05.844 "code": -19, 00:34:05.844 "message": "No such device" 00:34:05.844 } 00:34:05.844 14:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:34:05.844 14:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:05.844 14:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:05.844 14:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:05.844 14:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:06.103 aio_bdev 00:34:06.103 14:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 13de37c7-0b35-4208-9eb1-7fbc0a4a856a 00:34:06.103 14:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=13de37c7-0b35-4208-9eb1-7fbc0a4a856a 00:34:06.103 14:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:06.103 14:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:34:06.103 14:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:06.103 14:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:06.103 14:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:06.362 14:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 13de37c7-0b35-4208-9eb1-7fbc0a4a856a -t 2000 00:34:06.362 [ 00:34:06.362 { 00:34:06.362 "name": "13de37c7-0b35-4208-9eb1-7fbc0a4a856a", 00:34:06.362 "aliases": [ 00:34:06.362 "lvs/lvol" 00:34:06.362 ], 00:34:06.362 "product_name": "Logical Volume", 00:34:06.362 "block_size": 4096, 00:34:06.362 "num_blocks": 38912, 00:34:06.362 "uuid": "13de37c7-0b35-4208-9eb1-7fbc0a4a856a", 00:34:06.362 "assigned_rate_limits": { 00:34:06.362 "rw_ios_per_sec": 0, 00:34:06.362 "rw_mbytes_per_sec": 0, 00:34:06.362 "r_mbytes_per_sec": 0, 00:34:06.362 "w_mbytes_per_sec": 0 00:34:06.362 }, 00:34:06.362 "claimed": false, 00:34:06.362 "zoned": false, 00:34:06.362 "supported_io_types": { 00:34:06.362 "read": true, 00:34:06.362 "write": true, 00:34:06.362 "unmap": true, 00:34:06.362 "flush": false, 00:34:06.362 "reset": true, 00:34:06.362 "nvme_admin": false, 00:34:06.362 "nvme_io": false, 00:34:06.362 "nvme_io_md": false, 00:34:06.362 "write_zeroes": true, 00:34:06.362 "zcopy": false, 00:34:06.362 "get_zone_info": false, 00:34:06.362 "zone_management": false, 00:34:06.362 "zone_append": false, 00:34:06.362 "compare": false, 00:34:06.362 "compare_and_write": false, 00:34:06.362 "abort": false, 00:34:06.362 "seek_hole": true, 00:34:06.362 "seek_data": true, 00:34:06.362 "copy": false, 00:34:06.362 "nvme_iov_md": false 00:34:06.362 }, 00:34:06.362 "driver_specific": { 00:34:06.362 "lvol": { 00:34:06.362 "lvol_store_uuid": "13e889d4-d2dd-496e-85a7-daa9effb3e78", 00:34:06.362 "base_bdev": "aio_bdev", 00:34:06.362 
"thin_provision": false, 00:34:06.362 "num_allocated_clusters": 38, 00:34:06.362 "snapshot": false, 00:34:06.362 "clone": false, 00:34:06.362 "esnap_clone": false 00:34:06.362 } 00:34:06.362 } 00:34:06.362 } 00:34:06.362 ] 00:34:06.362 14:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:34:06.362 14:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13e889d4-d2dd-496e-85a7-daa9effb3e78 00:34:06.362 14:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:34:06.621 14:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:34:06.621 14:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13e889d4-d2dd-496e-85a7-daa9effb3e78 00:34:06.621 14:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:34:06.880 14:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:34:06.880 14:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 13de37c7-0b35-4208-9eb1-7fbc0a4a856a 00:34:06.880 14:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 13e889d4-d2dd-496e-85a7-daa9effb3e78 
00:34:07.139 14:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:07.399 14:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:07.399 00:34:07.399 real 0m15.569s 00:34:07.399 user 0m15.080s 00:34:07.399 sys 0m1.482s 00:34:07.399 14:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:07.399 14:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:34:07.399 ************************************ 00:34:07.399 END TEST lvs_grow_clean 00:34:07.399 ************************************ 00:34:07.399 14:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:34:07.399 14:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:07.399 14:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:07.399 14:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:07.399 ************************************ 00:34:07.399 START TEST lvs_grow_dirty 00:34:07.399 ************************************ 00:34:07.399 14:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:34:07.399 14:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:34:07.399 14:05:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:34:07.399 14:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:34:07.399 14:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:34:07.399 14:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:34:07.399 14:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:34:07.399 14:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:07.399 14:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:07.399 14:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:07.658 14:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:34:07.659 14:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:34:07.917 14:05:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=8b005ec0-c834-4e7c-80be-2a2aeb8b0f1f 00:34:07.917 14:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b005ec0-c834-4e7c-80be-2a2aeb8b0f1f 00:34:07.917 14:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:34:08.176 14:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:34:08.176 14:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:34:08.176 14:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8b005ec0-c834-4e7c-80be-2a2aeb8b0f1f lvol 150 00:34:08.176 14:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=341306e8-947b-4848-8d0e-987cca3a0f17 00:34:08.176 14:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:08.176 14:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:34:08.433 [2024-12-05 14:05:50.910981] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:34:08.433 [2024-12-05 
14:05:50.911110] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:34:08.433 true 00:34:08.433 14:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b005ec0-c834-4e7c-80be-2a2aeb8b0f1f 00:34:08.433 14:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:34:08.692 14:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:34:08.692 14:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:08.950 14:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 341306e8-947b-4848-8d0e-987cca3a0f17 00:34:08.950 14:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:09.208 [2024-12-05 14:05:51.647441] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:09.208 14:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:09.468 14:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=864468 00:34:09.468 14:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:09.468 14:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:34:09.468 14:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 864468 /var/tmp/bdevperf.sock 00:34:09.468 14:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 864468 ']' 00:34:09.468 14:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:09.468 14:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:09.468 14:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:09.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:09.468 14:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:09.468 14:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:09.468 [2024-12-05 14:05:51.904266] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:34:09.468 [2024-12-05 14:05:51.904314] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid864468 ] 00:34:09.468 [2024-12-05 14:05:51.975460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:09.468 [2024-12-05 14:05:52.017378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:09.726 14:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:09.726 14:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:34:09.726 14:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:34:09.985 Nvme0n1 00:34:09.985 14:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:34:10.244 [ 00:34:10.244 { 00:34:10.244 "name": "Nvme0n1", 00:34:10.244 "aliases": [ 00:34:10.244 "341306e8-947b-4848-8d0e-987cca3a0f17" 00:34:10.244 ], 00:34:10.244 "product_name": "NVMe disk", 00:34:10.244 "block_size": 4096, 00:34:10.244 "num_blocks": 38912, 00:34:10.244 "uuid": "341306e8-947b-4848-8d0e-987cca3a0f17", 00:34:10.244 "numa_id": 1, 00:34:10.244 "assigned_rate_limits": { 00:34:10.244 "rw_ios_per_sec": 0, 00:34:10.244 "rw_mbytes_per_sec": 0, 00:34:10.244 "r_mbytes_per_sec": 0, 00:34:10.244 "w_mbytes_per_sec": 0 00:34:10.244 }, 00:34:10.244 "claimed": false, 00:34:10.244 "zoned": false, 
00:34:10.244 "supported_io_types": { 00:34:10.244 "read": true, 00:34:10.244 "write": true, 00:34:10.244 "unmap": true, 00:34:10.244 "flush": true, 00:34:10.244 "reset": true, 00:34:10.244 "nvme_admin": true, 00:34:10.244 "nvme_io": true, 00:34:10.244 "nvme_io_md": false, 00:34:10.244 "write_zeroes": true, 00:34:10.244 "zcopy": false, 00:34:10.244 "get_zone_info": false, 00:34:10.244 "zone_management": false, 00:34:10.244 "zone_append": false, 00:34:10.244 "compare": true, 00:34:10.244 "compare_and_write": true, 00:34:10.244 "abort": true, 00:34:10.244 "seek_hole": false, 00:34:10.244 "seek_data": false, 00:34:10.244 "copy": true, 00:34:10.244 "nvme_iov_md": false 00:34:10.244 }, 00:34:10.244 "memory_domains": [ 00:34:10.244 { 00:34:10.244 "dma_device_id": "system", 00:34:10.244 "dma_device_type": 1 00:34:10.244 } 00:34:10.244 ], 00:34:10.244 "driver_specific": { 00:34:10.244 "nvme": [ 00:34:10.244 { 00:34:10.244 "trid": { 00:34:10.244 "trtype": "TCP", 00:34:10.244 "adrfam": "IPv4", 00:34:10.244 "traddr": "10.0.0.2", 00:34:10.244 "trsvcid": "4420", 00:34:10.244 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:10.244 }, 00:34:10.244 "ctrlr_data": { 00:34:10.244 "cntlid": 1, 00:34:10.244 "vendor_id": "0x8086", 00:34:10.244 "model_number": "SPDK bdev Controller", 00:34:10.244 "serial_number": "SPDK0", 00:34:10.244 "firmware_revision": "25.01", 00:34:10.244 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:10.244 "oacs": { 00:34:10.244 "security": 0, 00:34:10.244 "format": 0, 00:34:10.244 "firmware": 0, 00:34:10.244 "ns_manage": 0 00:34:10.244 }, 00:34:10.244 "multi_ctrlr": true, 00:34:10.244 "ana_reporting": false 00:34:10.244 }, 00:34:10.244 "vs": { 00:34:10.244 "nvme_version": "1.3" 00:34:10.244 }, 00:34:10.244 "ns_data": { 00:34:10.244 "id": 1, 00:34:10.244 "can_share": true 00:34:10.244 } 00:34:10.244 } 00:34:10.244 ], 00:34:10.244 "mp_policy": "active_passive" 00:34:10.244 } 00:34:10.244 } 00:34:10.244 ] 00:34:10.244 14:05:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=864648 00:34:10.244 14:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:10.245 14:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:34:10.245 Running I/O for 10 seconds... 00:34:11.176 Latency(us) 00:34:11.176 [2024-12-05T13:05:53.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:11.176 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:11.176 Nvme0n1 : 1.00 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:34:11.176 [2024-12-05T13:05:53.763Z] =================================================================================================================== 00:34:11.176 [2024-12-05T13:05:53.763Z] Total : 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:34:11.176 00:34:12.111 14:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8b005ec0-c834-4e7c-80be-2a2aeb8b0f1f 00:34:12.111 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:12.111 Nvme0n1 : 2.00 23558.50 92.03 0.00 0.00 0.00 0.00 0.00 00:34:12.111 [2024-12-05T13:05:54.698Z] =================================================================================================================== 00:34:12.111 [2024-12-05T13:05:54.698Z] Total : 23558.50 92.03 0.00 0.00 0.00 0.00 0.00 00:34:12.111 00:34:12.369 true 00:34:12.369 14:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 8b005ec0-c834-4e7c-80be-2a2aeb8b0f1f 00:34:12.369 14:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:34:12.627 14:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:34:12.627 14:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:34:12.627 14:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 864648 00:34:13.190 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:13.190 Nvme0n1 : 3.00 23622.00 92.27 0.00 0.00 0.00 0.00 0.00 00:34:13.190 [2024-12-05T13:05:55.777Z] =================================================================================================================== 00:34:13.190 [2024-12-05T13:05:55.777Z] Total : 23622.00 92.27 0.00 0.00 0.00 0.00 0.00 00:34:13.190 00:34:14.124 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:14.124 Nvme0n1 : 4.00 23653.75 92.40 0.00 0.00 0.00 0.00 0.00 00:34:14.124 [2024-12-05T13:05:56.711Z] =================================================================================================================== 00:34:14.124 [2024-12-05T13:05:56.711Z] Total : 23653.75 92.40 0.00 0.00 0.00 0.00 0.00 00:34:14.124 00:34:15.501 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:15.501 Nvme0n1 : 5.00 23698.20 92.57 0.00 0.00 0.00 0.00 0.00 00:34:15.501 [2024-12-05T13:05:58.088Z] =================================================================================================================== 00:34:15.501 [2024-12-05T13:05:58.088Z] Total : 23698.20 92.57 0.00 0.00 0.00 0.00 0.00 00:34:15.501 00:34:16.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:34:16.439 Nvme0n1 : 6.00 23749.00 92.77 0.00 0.00 0.00 0.00 0.00 00:34:16.439 [2024-12-05T13:05:59.026Z] =================================================================================================================== 00:34:16.439 [2024-12-05T13:05:59.026Z] Total : 23749.00 92.77 0.00 0.00 0.00 0.00 0.00 00:34:16.439 00:34:17.394 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:17.394 Nvme0n1 : 7.00 23785.29 92.91 0.00 0.00 0.00 0.00 0.00 00:34:17.394 [2024-12-05T13:05:59.981Z] =================================================================================================================== 00:34:17.394 [2024-12-05T13:05:59.981Z] Total : 23785.29 92.91 0.00 0.00 0.00 0.00 0.00 00:34:17.394 00:34:18.332 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:18.332 Nvme0n1 : 8.00 23796.62 92.96 0.00 0.00 0.00 0.00 0.00 00:34:18.332 [2024-12-05T13:06:00.919Z] =================================================================================================================== 00:34:18.332 [2024-12-05T13:06:00.919Z] Total : 23796.62 92.96 0.00 0.00 0.00 0.00 0.00 00:34:18.332 00:34:19.268 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:19.268 Nvme0n1 : 9.00 23791.33 92.93 0.00 0.00 0.00 0.00 0.00 00:34:19.268 [2024-12-05T13:06:01.855Z] =================================================================================================================== 00:34:19.268 [2024-12-05T13:06:01.855Z] Total : 23791.33 92.93 0.00 0.00 0.00 0.00 0.00 00:34:19.268 00:34:20.204 00:34:20.204 Latency(us) 00:34:20.204 [2024-12-05T13:06:02.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:20.204 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:20.204 Nvme0n1 : 10.00 23809.80 93.01 0.00 0.00 5372.87 4837.18 25465.42 00:34:20.204 [2024-12-05T13:06:02.791Z] 
=================================================================================================================== 00:34:20.204 [2024-12-05T13:06:02.791Z] Total : 23809.80 93.01 0.00 0.00 5372.87 4837.18 25465.42 00:34:20.204 { 00:34:20.204 "results": [ 00:34:20.204 { 00:34:20.204 "job": "Nvme0n1", 00:34:20.204 "core_mask": "0x2", 00:34:20.204 "workload": "randwrite", 00:34:20.204 "status": "finished", 00:34:20.204 "queue_depth": 128, 00:34:20.204 "io_size": 4096, 00:34:20.204 "runtime": 10.001178, 00:34:20.204 "iops": 23809.79520612472, 00:34:20.204 "mibps": 93.00701252392469, 00:34:20.204 "io_failed": 0, 00:34:20.204 "io_timeout": 0, 00:34:20.204 "avg_latency_us": 5372.870048677712, 00:34:20.204 "min_latency_us": 4837.1809523809525, 00:34:20.204 "max_latency_us": 25465.417142857143 00:34:20.204 } 00:34:20.204 ], 00:34:20.204 "core_count": 1 00:34:20.204 } 00:34:20.204 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 864468 00:34:20.204 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 864468 ']' 00:34:20.204 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 864468 00:34:20.204 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:34:20.204 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:20.204 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 864468 00:34:20.204 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:20.204 14:06:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:20.204 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 864468' 00:34:20.204 killing process with pid 864468 00:34:20.204 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 864468 00:34:20.204 Received shutdown signal, test time was about 10.000000 seconds 00:34:20.204 00:34:20.204 Latency(us) 00:34:20.204 [2024-12-05T13:06:02.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:20.204 [2024-12-05T13:06:02.791Z] =================================================================================================================== 00:34:20.204 [2024-12-05T13:06:02.791Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:20.204 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 864468 00:34:20.463 14:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:20.722 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:20.981 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b005ec0-c834-4e7c-80be-2a2aeb8b0f1f 00:34:20.981 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r 
'.[0].free_clusters' 00:34:20.981 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:34:20.981 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:34:20.981 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 861386 00:34:20.981 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 861386 00:34:20.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 861386 Killed "${NVMF_APP[@]}" "$@" 00:34:20.981 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:34:20.981 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:34:20.981 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:20.981 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:20.981 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:21.240 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=866525 00:34:21.240 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 866525 00:34:21.240 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:34:21.240 
14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 866525 ']' 00:34:21.240 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:21.240 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:21.240 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:21.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:21.240 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:21.240 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:21.240 [2024-12-05 14:06:03.612393] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:21.240 [2024-12-05 14:06:03.613328] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:34:21.240 [2024-12-05 14:06:03.613363] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:21.240 [2024-12-05 14:06:03.691906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:21.240 [2024-12-05 14:06:03.732198] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:21.240 [2024-12-05 14:06:03.732234] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:21.240 [2024-12-05 14:06:03.732241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:21.240 [2024-12-05 14:06:03.732247] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:21.240 [2024-12-05 14:06:03.732252] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:21.240 [2024-12-05 14:06:03.732813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:21.240 [2024-12-05 14:06:03.800707] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:21.240 [2024-12-05 14:06:03.800897] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:21.240 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:21.240 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:34:21.240 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:21.240 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:21.240 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:21.500 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:21.500 14:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:21.500 [2024-12-05 14:06:04.038191] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:34:21.500 [2024-12-05 14:06:04.038411] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:34:21.500 [2024-12-05 14:06:04.038505] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:34:21.500 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:34:21.500 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 341306e8-947b-4848-8d0e-987cca3a0f17 00:34:21.500 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=341306e8-947b-4848-8d0e-987cca3a0f17 00:34:21.500 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:21.500 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:34:21.500 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:21.500 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:21.500 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:21.760 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 341306e8-947b-4848-8d0e-987cca3a0f17 -t 2000 00:34:22.018 [ 
00:34:22.018 { 00:34:22.019 "name": "341306e8-947b-4848-8d0e-987cca3a0f17", 00:34:22.019 "aliases": [ 00:34:22.019 "lvs/lvol" 00:34:22.019 ], 00:34:22.019 "product_name": "Logical Volume", 00:34:22.019 "block_size": 4096, 00:34:22.019 "num_blocks": 38912, 00:34:22.019 "uuid": "341306e8-947b-4848-8d0e-987cca3a0f17", 00:34:22.019 "assigned_rate_limits": { 00:34:22.019 "rw_ios_per_sec": 0, 00:34:22.019 "rw_mbytes_per_sec": 0, 00:34:22.019 "r_mbytes_per_sec": 0, 00:34:22.019 "w_mbytes_per_sec": 0 00:34:22.019 }, 00:34:22.019 "claimed": false, 00:34:22.019 "zoned": false, 00:34:22.019 "supported_io_types": { 00:34:22.019 "read": true, 00:34:22.019 "write": true, 00:34:22.019 "unmap": true, 00:34:22.019 "flush": false, 00:34:22.019 "reset": true, 00:34:22.019 "nvme_admin": false, 00:34:22.019 "nvme_io": false, 00:34:22.019 "nvme_io_md": false, 00:34:22.019 "write_zeroes": true, 00:34:22.019 "zcopy": false, 00:34:22.019 "get_zone_info": false, 00:34:22.019 "zone_management": false, 00:34:22.019 "zone_append": false, 00:34:22.019 "compare": false, 00:34:22.019 "compare_and_write": false, 00:34:22.019 "abort": false, 00:34:22.019 "seek_hole": true, 00:34:22.019 "seek_data": true, 00:34:22.019 "copy": false, 00:34:22.019 "nvme_iov_md": false 00:34:22.019 }, 00:34:22.019 "driver_specific": { 00:34:22.019 "lvol": { 00:34:22.019 "lvol_store_uuid": "8b005ec0-c834-4e7c-80be-2a2aeb8b0f1f", 00:34:22.019 "base_bdev": "aio_bdev", 00:34:22.019 "thin_provision": false, 00:34:22.019 "num_allocated_clusters": 38, 00:34:22.019 "snapshot": false, 00:34:22.019 "clone": false, 00:34:22.019 "esnap_clone": false 00:34:22.019 } 00:34:22.019 } 00:34:22.019 } 00:34:22.019 ] 00:34:22.019 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:34:22.019 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b005ec0-c834-4e7c-80be-2a2aeb8b0f1f 00:34:22.019 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:34:22.278 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:34:22.278 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b005ec0-c834-4e7c-80be-2a2aeb8b0f1f 00:34:22.278 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:34:22.278 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:34:22.278 14:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:22.537 [2024-12-05 14:06:04.989274] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:34:22.537 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b005ec0-c834-4e7c-80be-2a2aeb8b0f1f 00:34:22.537 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:34:22.537 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
8b005ec0-c834-4e7c-80be-2a2aeb8b0f1f 00:34:22.537 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:22.537 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:22.537 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:22.537 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:22.537 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:22.537 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:22.537 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:22.537 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:34:22.537 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b005ec0-c834-4e7c-80be-2a2aeb8b0f1f 00:34:22.796 request: 00:34:22.796 { 00:34:22.796 "uuid": "8b005ec0-c834-4e7c-80be-2a2aeb8b0f1f", 00:34:22.796 "method": "bdev_lvol_get_lvstores", 00:34:22.796 "req_id": 1 00:34:22.796 } 00:34:22.796 Got JSON-RPC 
error response 00:34:22.796 response: 00:34:22.796 { 00:34:22.796 "code": -19, 00:34:22.796 "message": "No such device" 00:34:22.796 } 00:34:22.796 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:34:22.796 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:22.796 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:22.796 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:22.796 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:23.055 aio_bdev 00:34:23.055 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 341306e8-947b-4848-8d0e-987cca3a0f17 00:34:23.055 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=341306e8-947b-4848-8d0e-987cca3a0f17 00:34:23.055 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:23.055 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:34:23.055 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:23.055 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:23.055 14:06:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:23.312 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 341306e8-947b-4848-8d0e-987cca3a0f17 -t 2000 00:34:23.312 [ 00:34:23.312 { 00:34:23.312 "name": "341306e8-947b-4848-8d0e-987cca3a0f17", 00:34:23.312 "aliases": [ 00:34:23.312 "lvs/lvol" 00:34:23.312 ], 00:34:23.313 "product_name": "Logical Volume", 00:34:23.313 "block_size": 4096, 00:34:23.313 "num_blocks": 38912, 00:34:23.313 "uuid": "341306e8-947b-4848-8d0e-987cca3a0f17", 00:34:23.313 "assigned_rate_limits": { 00:34:23.313 "rw_ios_per_sec": 0, 00:34:23.313 "rw_mbytes_per_sec": 0, 00:34:23.313 "r_mbytes_per_sec": 0, 00:34:23.313 "w_mbytes_per_sec": 0 00:34:23.313 }, 00:34:23.313 "claimed": false, 00:34:23.313 "zoned": false, 00:34:23.313 "supported_io_types": { 00:34:23.313 "read": true, 00:34:23.313 "write": true, 00:34:23.313 "unmap": true, 00:34:23.313 "flush": false, 00:34:23.313 "reset": true, 00:34:23.313 "nvme_admin": false, 00:34:23.313 "nvme_io": false, 00:34:23.313 "nvme_io_md": false, 00:34:23.313 "write_zeroes": true, 00:34:23.313 "zcopy": false, 00:34:23.313 "get_zone_info": false, 00:34:23.313 "zone_management": false, 00:34:23.313 "zone_append": false, 00:34:23.313 "compare": false, 00:34:23.313 "compare_and_write": false, 00:34:23.313 "abort": false, 00:34:23.313 "seek_hole": true, 00:34:23.313 "seek_data": true, 00:34:23.313 "copy": false, 00:34:23.313 "nvme_iov_md": false 00:34:23.313 }, 00:34:23.313 "driver_specific": { 00:34:23.313 "lvol": { 00:34:23.313 "lvol_store_uuid": "8b005ec0-c834-4e7c-80be-2a2aeb8b0f1f", 00:34:23.313 "base_bdev": "aio_bdev", 00:34:23.313 "thin_provision": false, 00:34:23.313 "num_allocated_clusters": 38, 00:34:23.313 
"snapshot": false, 00:34:23.313 "clone": false, 00:34:23.313 "esnap_clone": false 00:34:23.313 } 00:34:23.313 } 00:34:23.313 } 00:34:23.313 ] 00:34:23.313 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:34:23.313 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b005ec0-c834-4e7c-80be-2a2aeb8b0f1f 00:34:23.313 14:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:34:23.571 14:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:34:23.571 14:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b005ec0-c834-4e7c-80be-2a2aeb8b0f1f 00:34:23.571 14:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:34:23.829 14:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:34:23.829 14:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 341306e8-947b-4848-8d0e-987cca3a0f17 00:34:23.829 14:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8b005ec0-c834-4e7c-80be-2a2aeb8b0f1f 00:34:24.088 14:06:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:24.347 14:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:24.347 00:34:24.347 real 0m16.915s 00:34:24.347 user 0m34.507s 00:34:24.347 sys 0m3.643s 00:34:24.347 14:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:24.347 14:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:24.347 ************************************ 00:34:24.347 END TEST lvs_grow_dirty 00:34:24.347 ************************************ 00:34:24.347 14:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:34:24.347 14:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:34:24.347 14:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:34:24.347 14:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:34:24.347 14:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:34:24.347 14:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:34:24.347 14:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:34:24.347 14:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 
00:34:24.347 14:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:34:24.347 nvmf_trace.0 00:34:24.606 14:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:34:24.606 14:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:34:24.606 14:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:24.606 14:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:34:24.606 14:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:24.606 14:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:34:24.606 14:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:24.606 14:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:24.606 rmmod nvme_tcp 00:34:24.606 rmmod nvme_fabrics 00:34:24.606 rmmod nvme_keyring 00:34:24.606 14:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:24.606 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:34:24.606 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:34:24.607 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 866525 ']' 00:34:24.607 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 866525 00:34:24.607 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
common/autotest_common.sh@954 -- # '[' -z 866525 ']' 00:34:24.607 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 866525 00:34:24.607 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:34:24.607 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:24.607 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 866525 00:34:24.607 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:24.607 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:24.607 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 866525' 00:34:24.607 killing process with pid 866525 00:34:24.607 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 866525 00:34:24.607 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 866525 00:34:24.869 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:24.869 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:24.869 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:24.869 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:34:24.869 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:34:24.869 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:24.869 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:34:24.869 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:24.869 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:24.869 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:24.869 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:24.869 14:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:26.891 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:26.891 00:34:26.891 real 0m42.312s 00:34:26.891 user 0m52.289s 00:34:26.891 sys 0m10.019s 00:34:26.891 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:26.891 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:26.891 ************************************ 00:34:26.891 END TEST nvmf_lvs_grow 00:34:26.891 ************************************ 00:34:26.891 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:34:26.891 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:26.891 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:26.891 14:06:09 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:26.891 ************************************ 00:34:26.891 START TEST nvmf_bdev_io_wait 00:34:26.891 ************************************ 00:34:26.891 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:34:26.891 * Looking for test storage... 00:34:26.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:26.891 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:26.891 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:34:26.891 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@337 -- # read -ra ver2 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:27.150 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.150 --rc genhtml_branch_coverage=1 00:34:27.150 --rc genhtml_function_coverage=1 00:34:27.150 --rc genhtml_legend=1 00:34:27.150 --rc geninfo_all_blocks=1 00:34:27.150 --rc geninfo_unexecuted_blocks=1 00:34:27.150 00:34:27.150 ' 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:27.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.150 --rc genhtml_branch_coverage=1 00:34:27.150 --rc genhtml_function_coverage=1 00:34:27.150 --rc genhtml_legend=1 00:34:27.150 --rc geninfo_all_blocks=1 00:34:27.150 --rc geninfo_unexecuted_blocks=1 00:34:27.150 00:34:27.150 ' 00:34:27.150 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:27.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.150 --rc genhtml_branch_coverage=1 00:34:27.150 --rc genhtml_function_coverage=1 00:34:27.150 --rc genhtml_legend=1 00:34:27.150 --rc geninfo_all_blocks=1 00:34:27.151 --rc geninfo_unexecuted_blocks=1 00:34:27.151 00:34:27.151 ' 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:27.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.151 --rc genhtml_branch_coverage=1 00:34:27.151 --rc genhtml_function_coverage=1 00:34:27.151 --rc genhtml_legend=1 00:34:27.151 --rc geninfo_all_blocks=1 00:34:27.151 --rc geninfo_unexecuted_blocks=1 00:34:27.151 00:34:27.151 ' 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:34:27.151 14:06:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:27.151 14:06:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # 
have_pci_nics=0 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:34:27.151 14:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:33.724 14:06:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:33.724 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:33.724 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:33.724 14:06:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:34:33.724 Found net devices under 0000:86:00.0: cvl_0_0 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:33.724 Found net devices under 0000:86:00.1: cvl_0_1 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:33.724 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:33.725 14:06:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:33.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:33.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:34:33.725 00:34:33.725 --- 10.0.0.2 ping statistics --- 00:34:33.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:33.725 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:33.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:33.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:34:33.725 00:34:33.725 --- 10.0.0.1 ping statistics --- 00:34:33.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:33.725 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:33.725 14:06:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=871035 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 871035 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 871035 ']' 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:33.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:33.725 [2024-12-05 14:06:15.511774] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:33.725 [2024-12-05 14:06:15.512690] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:34:33.725 [2024-12-05 14:06:15.512724] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:33.725 [2024-12-05 14:06:15.591374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:33.725 [2024-12-05 14:06:15.634520] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:33.725 [2024-12-05 14:06:15.634558] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:33.725 [2024-12-05 14:06:15.634565] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:33.725 [2024-12-05 14:06:15.634588] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:33.725 [2024-12-05 14:06:15.634593] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:33.725 [2024-12-05 14:06:15.635995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:33.725 [2024-12-05 14:06:15.636104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:33.725 [2024-12-05 14:06:15.636211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:33.725 [2024-12-05 14:06:15.636212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:33.725 [2024-12-05 14:06:15.636483] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.725 14:06:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:33.725 [2024-12-05 14:06:15.767890] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:33.725 [2024-12-05 14:06:15.768062] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:33.725 [2024-12-05 14:06:15.768519] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:33.725 [2024-12-05 14:06:15.768583] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:33.725 [2024-12-05 14:06:15.780876] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:33.725 Malloc0 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:33.725 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.725 14:06:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:33.726 [2024-12-05 14:06:15.853168] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=871068 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=871070 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:34:33.726 14:06:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:33.726 { 00:34:33.726 "params": { 00:34:33.726 "name": "Nvme$subsystem", 00:34:33.726 "trtype": "$TEST_TRANSPORT", 00:34:33.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:33.726 "adrfam": "ipv4", 00:34:33.726 "trsvcid": "$NVMF_PORT", 00:34:33.726 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:33.726 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:33.726 "hdgst": ${hdgst:-false}, 00:34:33.726 "ddgst": ${ddgst:-false} 00:34:33.726 }, 00:34:33.726 "method": "bdev_nvme_attach_controller" 00:34:33.726 } 00:34:33.726 EOF 00:34:33.726 )") 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=871072 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:33.726 14:06:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=871075 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:33.726 { 00:34:33.726 "params": { 00:34:33.726 "name": "Nvme$subsystem", 00:34:33.726 "trtype": "$TEST_TRANSPORT", 00:34:33.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:33.726 "adrfam": "ipv4", 00:34:33.726 "trsvcid": "$NVMF_PORT", 00:34:33.726 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:33.726 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:33.726 "hdgst": ${hdgst:-false}, 00:34:33.726 "ddgst": ${ddgst:-false} 00:34:33.726 }, 00:34:33.726 "method": "bdev_nvme_attach_controller" 00:34:33.726 } 00:34:33.726 EOF 00:34:33.726 )") 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:33.726 { 00:34:33.726 "params": { 00:34:33.726 "name": "Nvme$subsystem", 00:34:33.726 "trtype": "$TEST_TRANSPORT", 00:34:33.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:33.726 "adrfam": "ipv4", 00:34:33.726 "trsvcid": "$NVMF_PORT", 00:34:33.726 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:33.726 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:33.726 "hdgst": ${hdgst:-false}, 00:34:33.726 "ddgst": ${ddgst:-false} 00:34:33.726 }, 00:34:33.726 "method": "bdev_nvme_attach_controller" 00:34:33.726 } 00:34:33.726 EOF 00:34:33.726 )") 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:33.726 { 00:34:33.726 "params": { 00:34:33.726 "name": "Nvme$subsystem", 00:34:33.726 "trtype": "$TEST_TRANSPORT", 00:34:33.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:33.726 "adrfam": "ipv4", 00:34:33.726 "trsvcid": "$NVMF_PORT", 00:34:33.726 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:33.726 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:33.726 "hdgst": ${hdgst:-false}, 00:34:33.726 "ddgst": ${ddgst:-false} 00:34:33.726 }, 00:34:33.726 "method": 
"bdev_nvme_attach_controller" 00:34:33.726 } 00:34:33.726 EOF 00:34:33.726 )") 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 871068 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:33.726 "params": { 00:34:33.726 "name": "Nvme1", 00:34:33.726 "trtype": "tcp", 00:34:33.726 "traddr": "10.0.0.2", 00:34:33.726 "adrfam": "ipv4", 00:34:33.726 "trsvcid": "4420", 00:34:33.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:33.726 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:33.726 "hdgst": false, 00:34:33.726 "ddgst": false 00:34:33.726 }, 00:34:33.726 "method": "bdev_nvme_attach_controller" 00:34:33.726 }' 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:33.726 "params": { 00:34:33.726 "name": "Nvme1", 00:34:33.726 "trtype": "tcp", 00:34:33.726 "traddr": "10.0.0.2", 00:34:33.726 "adrfam": "ipv4", 00:34:33.726 "trsvcid": "4420", 00:34:33.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:33.726 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:33.726 "hdgst": false, 00:34:33.726 "ddgst": false 00:34:33.726 }, 00:34:33.726 "method": "bdev_nvme_attach_controller" 00:34:33.726 }' 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:33.726 "params": { 00:34:33.726 "name": "Nvme1", 00:34:33.726 "trtype": "tcp", 00:34:33.726 "traddr": "10.0.0.2", 00:34:33.726 "adrfam": "ipv4", 00:34:33.726 "trsvcid": "4420", 00:34:33.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:33.726 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:33.726 "hdgst": false, 00:34:33.726 "ddgst": false 00:34:33.726 }, 00:34:33.726 "method": "bdev_nvme_attach_controller" 00:34:33.726 }' 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:33.726 14:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:33.726 "params": { 00:34:33.726 "name": "Nvme1", 00:34:33.726 "trtype": "tcp", 00:34:33.726 "traddr": "10.0.0.2", 00:34:33.726 "adrfam": "ipv4", 00:34:33.726 "trsvcid": "4420", 00:34:33.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:33.726 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:33.726 "hdgst": false, 00:34:33.726 "ddgst": false 00:34:33.726 }, 00:34:33.726 "method": "bdev_nvme_attach_controller" 
00:34:33.727 }' 00:34:33.727 [2024-12-05 14:06:15.902280] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:34:33.727 [2024-12-05 14:06:15.902332] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:34:33.727 [2024-12-05 14:06:15.905891] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:34:33.727 [2024-12-05 14:06:15.905933] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:34:33.727 [2024-12-05 14:06:15.906091] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:34:33.727 [2024-12-05 14:06:15.906129] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:34:33.727 [2024-12-05 14:06:15.908984] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:34:33.727 [2024-12-05 14:06:15.909026] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:34:33.727 [2024-12-05 14:06:16.085516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:33.727 [2024-12-05 14:06:16.128231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:33.727 [2024-12-05 14:06:16.179678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:33.727 [2024-12-05 14:06:16.233424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:33.727 [2024-12-05 14:06:16.233860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:33.727 [2024-12-05 14:06:16.273884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:34:33.727 [2024-12-05 14:06:16.295858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:33.985 [2024-12-05 14:06:16.338138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:33.985 Running I/O for 1 seconds... 00:34:33.985 Running I/O for 1 seconds... 00:34:33.985 Running I/O for 1 seconds... 00:34:34.243 Running I/O for 1 seconds... 
00:34:35.177 14744.00 IOPS, 57.59 MiB/s 00:34:35.177 Latency(us) 00:34:35.177 [2024-12-05T13:06:17.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:35.177 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:34:35.177 Nvme1n1 : 1.01 14804.35 57.83 0.00 0.00 8622.30 3198.78 10735.42 00:34:35.177 [2024-12-05T13:06:17.764Z] =================================================================================================================== 00:34:35.177 [2024-12-05T13:06:17.764Z] Total : 14804.35 57.83 0.00 0.00 8622.30 3198.78 10735.42 00:34:35.177 10681.00 IOPS, 41.72 MiB/s 00:34:35.177 Latency(us) 00:34:35.177 [2024-12-05T13:06:17.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:35.177 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:34:35.177 Nvme1n1 : 1.01 10753.90 42.01 0.00 0.00 11864.15 1482.36 15166.90 00:34:35.177 [2024-12-05T13:06:17.764Z] =================================================================================================================== 00:34:35.177 [2024-12-05T13:06:17.764Z] Total : 10753.90 42.01 0.00 0.00 11864.15 1482.36 15166.90 00:34:35.177 243624.00 IOPS, 951.66 MiB/s 00:34:35.177 Latency(us) 00:34:35.177 [2024-12-05T13:06:17.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:35.177 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:34:35.177 Nvme1n1 : 1.00 243263.94 950.25 0.00 0.00 523.63 219.43 1490.16 00:34:35.177 [2024-12-05T13:06:17.764Z] =================================================================================================================== 00:34:35.177 [2024-12-05T13:06:17.764Z] Total : 243263.94 950.25 0.00 0.00 523.63 219.43 1490.16 00:34:35.177 10659.00 IOPS, 41.64 MiB/s 00:34:35.177 Latency(us) 00:34:35.177 [2024-12-05T13:06:17.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:35.177 Job: Nvme1n1 (Core Mask 
0x20, workload: read, depth: 128, IO size: 4096) 00:34:35.177 Nvme1n1 : 1.01 10736.04 41.94 0.00 0.00 11888.53 3978.97 17226.61 00:34:35.177 [2024-12-05T13:06:17.764Z] =================================================================================================================== 00:34:35.177 [2024-12-05T13:06:17.764Z] Total : 10736.04 41.94 0.00 0.00 11888.53 3978.97 17226.61 00:34:35.177 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 871070 00:34:35.177 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 871072 00:34:35.177 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 871075 00:34:35.177 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:35.177 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.177 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:35.177 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.177 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:34:35.177 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:34:35.177 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:35.177 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:34:35.177 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:35.177 14:06:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:34:35.177 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:35.177 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:35.177 rmmod nvme_tcp 00:34:35.177 rmmod nvme_fabrics 00:34:35.435 rmmod nvme_keyring 00:34:35.435 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:35.435 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:34:35.435 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:34:35.435 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 871035 ']' 00:34:35.435 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 871035 00:34:35.435 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 871035 ']' 00:34:35.435 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 871035 00:34:35.435 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:34:35.435 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:35.435 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 871035 00:34:35.435 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:35.435 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:35.435 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 871035' 00:34:35.435 killing process with pid 871035 00:34:35.435 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 871035 00:34:35.435 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 871035 00:34:35.435 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:35.435 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:35.435 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:35.435 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:34:35.435 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:34:35.435 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:35.435 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:34:35.435 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:35.435 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:35.435 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:35.435 14:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:35.435 14:06:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:37.973 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:37.973 00:34:37.973 real 0m10.691s 00:34:37.973 user 0m14.870s 00:34:37.973 sys 0m6.569s 00:34:37.973 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:37.973 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:37.973 ************************************ 00:34:37.973 END TEST nvmf_bdev_io_wait 00:34:37.973 ************************************ 00:34:37.973 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:37.973 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:37.973 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:37.973 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:37.973 ************************************ 00:34:37.973 START TEST nvmf_queue_depth 00:34:37.973 ************************************ 00:34:37.973 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:37.973 * Looking for test storage... 
00:34:37.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:37.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:37.974 --rc genhtml_branch_coverage=1 00:34:37.974 --rc genhtml_function_coverage=1 00:34:37.974 --rc genhtml_legend=1 00:34:37.974 --rc geninfo_all_blocks=1 00:34:37.974 --rc geninfo_unexecuted_blocks=1 00:34:37.974 00:34:37.974 ' 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:37.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:37.974 --rc genhtml_branch_coverage=1 00:34:37.974 --rc genhtml_function_coverage=1 00:34:37.974 --rc genhtml_legend=1 00:34:37.974 --rc geninfo_all_blocks=1 00:34:37.974 --rc geninfo_unexecuted_blocks=1 00:34:37.974 00:34:37.974 ' 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:37.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:37.974 --rc genhtml_branch_coverage=1 00:34:37.974 --rc genhtml_function_coverage=1 00:34:37.974 --rc genhtml_legend=1 00:34:37.974 --rc geninfo_all_blocks=1 00:34:37.974 --rc geninfo_unexecuted_blocks=1 00:34:37.974 00:34:37.974 ' 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:37.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:37.974 --rc genhtml_branch_coverage=1 00:34:37.974 --rc genhtml_function_coverage=1 00:34:37.974 --rc genhtml_legend=1 00:34:37.974 --rc 
geninfo_all_blocks=1 00:34:37.974 --rc geninfo_unexecuted_blocks=1 00:34:37.974 00:34:37.974 ' 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:37.974 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.975 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.975 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.975 14:06:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:34:37.975 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.975 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:34:37.975 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:37.975 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:37.975 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:37.975 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:37.975 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:37.975 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:37.975 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:37.975 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:37.975 14:06:20 
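Note the PATH echoed above: every time paths/export.sh is sourced it prepends the golangci/protoc/go directories again, so the same entries appear six-plus times. That is harmless here, but if one wanted to collapse it, an order-preserving dedupe along these lines would do it (`dedupe_path` is a hypothetical helper, not part of the SPDK scripts):

```shell
# Hypothetical order-preserving PATH dedupe; keeps the first occurrence
# of each colon-separated entry and drops later repeats.
dedupe_path() {
    printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}
```

Keeping the first occurrence preserves lookup precedence, which is what matters when the same directory is prepended repeatedly.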
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:37.975 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:37.975 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:34:37.975 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:34:37.975 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:37.975 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:34:37.975 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:37.975 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:37.975 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:37.975 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:37.975 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:37.975 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:37.975 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:37.975 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:37.975 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:37.975 14:06:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:37.975 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:34:37.975 14:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:34:44.551 
14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:44.551 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:44.551 14:06:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:44.551 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:44.551 Found net devices under 0000:86:00.0: cvl_0_0 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:44.551 Found net devices under 0000:86:00.1: cvl_0_1 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:44.551 14:06:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:44.551 14:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:44.551 14:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:44.551 14:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:44.552 14:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:44.552 14:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:44.552 14:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:44.552 14:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:44.552 14:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:44.552 14:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:44.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:44.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:34:44.552 00:34:44.552 --- 10.0.0.2 ping statistics --- 00:34:44.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:44.552 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:34:44.552 14:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:44.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:44.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:34:44.552 00:34:44.552 --- 10.0.0.1 ping statistics --- 00:34:44.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:44.552 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:34:44.552 14:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:44.552 14:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:34:44.552 14:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:44.552 14:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:44.552 14:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:44.552 14:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:44.552 14:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:44.552 14:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:44.552 14:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:44.552 14:06:26 
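The nvmf_tcp_init block above can be condensed into the following sketch: move the target-side port into a private network namespace, address both ends on 10.0.0.0/24, open TCP/4420, and verify connectivity both ways with ping. The interface names (`cvl_0_0`/`cvl_0_1`), namespace name, and addresses are copied from the log; the commands need root and this rig's physical E810 ports, so this is illustrative rather than portable:

```shell
#!/usr/bin/env bash
# Condensed from the nvmf_tcp_init trace above; requires root and the
# two port interfaces (cvl_0_0 / cvl_0_1) present on this test rig.
set -euo pipefail

NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"           # target port lives in the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator side stays on the host
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic in (mirrors the 'ipts' iptables wrapper in the log)
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                        # host -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1    # namespaced target -> host
```

Putting the target in a namespace is what lets a single machine act as both initiator and target over real NICs, which is why the target app is later launched under `ip netns exec`.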
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:34:44.552 14:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:44.552 14:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:44.552 14:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:44.552 14:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=874847 00:34:44.552 14:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 874847 00:34:44.552 14:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:44.552 14:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 874847 ']' 00:34:44.552 14:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:44.552 14:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:44.552 14:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:44.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:44.552 14:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:44.552 14:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:44.552 [2024-12-05 14:06:26.320153] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:44.552 [2024-12-05 14:06:26.321041] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:34:44.552 [2024-12-05 14:06:26.321075] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:44.552 [2024-12-05 14:06:26.402860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:44.552 [2024-12-05 14:06:26.444004] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:44.552 [2024-12-05 14:06:26.444039] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:44.552 [2024-12-05 14:06:26.444046] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:44.552 [2024-12-05 14:06:26.444052] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:44.552 [2024-12-05 14:06:26.444057] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:44.552 [2024-12-05 14:06:26.444657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:44.552 [2024-12-05 14:06:26.512555] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:44.552 [2024-12-05 14:06:26.512758] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:44.812 [2024-12-05 14:06:27.205273] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:44.812 Malloc0 00:34:44.812 14:06:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:44.812 [2024-12-05 14:06:27.285481] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.812 
14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=875089 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 875089 /var/tmp/bdevperf.sock 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 875089 ']' 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:44.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:44.812 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:44.812 [2024-12-05 14:06:27.333238] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:34:44.812 [2024-12-05 14:06:27.333280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid875089 ] 00:34:45.072 [2024-12-05 14:06:27.407285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:45.072 [2024-12-05 14:06:27.452351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:45.072 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:45.072 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:34:45.072 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:45.072 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.072 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:45.072 NVMe0n1 00:34:45.072 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.072 14:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:45.332 Running I/O for 10 seconds... 
00:34:47.210 12026.00 IOPS, 46.98 MiB/s [2024-12-05T13:06:30.734Z] 12294.50 IOPS, 48.03 MiB/s [2024-12-05T13:06:32.114Z] 12442.33 IOPS, 48.60 MiB/s [2024-12-05T13:06:33.053Z] 12542.75 IOPS, 49.00 MiB/s [2024-12-05T13:06:33.990Z] 12562.20 IOPS, 49.07 MiB/s [2024-12-05T13:06:34.927Z] 12606.17 IOPS, 49.24 MiB/s [2024-12-05T13:06:35.863Z] 12601.57 IOPS, 49.22 MiB/s [2024-12-05T13:06:36.799Z] 12661.25 IOPS, 49.46 MiB/s [2024-12-05T13:06:38.174Z] 12656.78 IOPS, 49.44 MiB/s [2024-12-05T13:06:38.174Z] 12677.60 IOPS, 49.52 MiB/s 00:34:55.587 Latency(us) 00:34:55.587 [2024-12-05T13:06:38.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:55.587 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:34:55.587 Verification LBA range: start 0x0 length 0x4000 00:34:55.587 NVMe0n1 : 10.06 12684.34 49.55 0.00 0.00 80444.47 18849.40 52428.80 00:34:55.587 [2024-12-05T13:06:38.174Z] =================================================================================================================== 00:34:55.587 [2024-12-05T13:06:38.174Z] Total : 12684.34 49.55 0.00 0.00 80444.47 18849.40 52428.80 00:34:55.587 { 00:34:55.587 "results": [ 00:34:55.587 { 00:34:55.587 "job": "NVMe0n1", 00:34:55.587 "core_mask": "0x1", 00:34:55.587 "workload": "verify", 00:34:55.587 "status": "finished", 00:34:55.587 "verify_range": { 00:34:55.587 "start": 0, 00:34:55.587 "length": 16384 00:34:55.588 }, 00:34:55.588 "queue_depth": 1024, 00:34:55.588 "io_size": 4096, 00:34:55.588 "runtime": 10.06225, 00:34:55.588 "iops": 12684.339983602076, 00:34:55.588 "mibps": 49.54820306094561, 00:34:55.588 "io_failed": 0, 00:34:55.588 "io_timeout": 0, 00:34:55.588 "avg_latency_us": 80444.47277726726, 00:34:55.588 "min_latency_us": 18849.401904761904, 00:34:55.588 "max_latency_us": 52428.8 00:34:55.588 } 00:34:55.588 ], 00:34:55.588 "core_count": 1 00:34:55.588 } 00:34:55.588 14:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 875089 00:34:55.588 14:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 875089 ']' 00:34:55.588 14:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 875089 00:34:55.588 14:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:34:55.588 14:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:55.588 14:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 875089 00:34:55.588 14:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:55.588 14:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:55.588 14:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 875089' 00:34:55.588 killing process with pid 875089 00:34:55.588 14:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 875089 00:34:55.588 Received shutdown signal, test time was about 10.000000 seconds 00:34:55.588 00:34:55.588 Latency(us) 00:34:55.588 [2024-12-05T13:06:38.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:55.588 [2024-12-05T13:06:38.175Z] =================================================================================================================== 00:34:55.588 [2024-12-05T13:06:38.175Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:55.588 14:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 875089 00:34:55.588 14:06:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:34:55.588 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:34:55.588 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:55.588 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:34:55.588 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:55.588 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:34:55.588 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:55.588 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:55.588 rmmod nvme_tcp 00:34:55.588 rmmod nvme_fabrics 00:34:55.588 rmmod nvme_keyring 00:34:55.588 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:55.588 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:34:55.588 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:34:55.588 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 874847 ']' 00:34:55.588 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 874847 00:34:55.588 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 874847 ']' 00:34:55.588 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 874847 00:34:55.588 14:06:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:34:55.588 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:55.588 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 874847 00:34:55.588 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:55.588 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:55.588 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 874847' 00:34:55.588 killing process with pid 874847 00:34:55.588 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 874847 00:34:55.588 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 874847 00:34:55.846 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:55.846 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:55.846 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:55.846 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:34:55.846 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:34:55.846 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:55.846 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:34:55.846 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:55.846 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:55.846 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:55.846 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:55.846 14:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:58.418 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:58.418 00:34:58.418 real 0m20.258s 00:34:58.418 user 0m22.754s 00:34:58.418 sys 0m6.292s 00:34:58.418 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:58.418 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:58.418 ************************************ 00:34:58.418 END TEST nvmf_queue_depth 00:34:58.418 ************************************ 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:58.419 ************************************ 00:34:58.419 START 
TEST nvmf_target_multipath 00:34:58.419 ************************************ 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:34:58.419 * Looking for test storage... 00:34:58.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:34:58.419 14:06:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:58.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:58.419 --rc genhtml_branch_coverage=1 00:34:58.419 --rc genhtml_function_coverage=1 00:34:58.419 --rc genhtml_legend=1 00:34:58.419 --rc geninfo_all_blocks=1 00:34:58.419 --rc geninfo_unexecuted_blocks=1 00:34:58.419 00:34:58.419 ' 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:58.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:58.419 --rc genhtml_branch_coverage=1 00:34:58.419 --rc genhtml_function_coverage=1 00:34:58.419 --rc genhtml_legend=1 00:34:58.419 --rc geninfo_all_blocks=1 00:34:58.419 --rc geninfo_unexecuted_blocks=1 00:34:58.419 00:34:58.419 ' 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:58.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:58.419 --rc genhtml_branch_coverage=1 00:34:58.419 --rc genhtml_function_coverage=1 00:34:58.419 --rc genhtml_legend=1 00:34:58.419 --rc geninfo_all_blocks=1 00:34:58.419 --rc geninfo_unexecuted_blocks=1 00:34:58.419 00:34:58.419 ' 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:58.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:58.419 --rc genhtml_branch_coverage=1 00:34:58.419 --rc genhtml_function_coverage=1 00:34:58.419 --rc genhtml_legend=1 00:34:58.419 --rc geninfo_all_blocks=1 00:34:58.419 --rc geninfo_unexecuted_blocks=1 00:34:58.419 00:34:58.419 ' 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:58.419 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:58.420 14:06:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:58.420 14:06:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:34:58.420 14:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:35:04.988 14:06:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:04.988 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:04.988 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:04.988 Found net devices under 0000:86:00.0: cvl_0_0 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:04.988 14:06:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:04.988 Found net devices under 0000:86:00.1: cvl_0_1 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:04.988 14:06:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:04.988 14:06:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:04.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:04.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:35:04.988 00:35:04.988 --- 10.0.0.2 ping statistics --- 00:35:04.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:04.988 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:04.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:04.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:35:04.988 00:35:04.988 --- 10.0.0.1 ping statistics --- 00:35:04.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:04.988 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:35:04.988 only one NIC for nvmf test 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:35:04.988 14:06:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:04.988 rmmod nvme_tcp 00:35:04.988 rmmod nvme_fabrics 00:35:04.988 rmmod nvme_keyring 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:35:04.988 14:06:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:04.988 14:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:06.365 
14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:06.365 00:35:06.365 real 0m8.295s 00:35:06.365 user 0m1.839s 00:35:06.365 sys 0m4.473s 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:35:06.365 ************************************ 00:35:06.365 END TEST nvmf_target_multipath 00:35:06.365 ************************************ 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:06.365 ************************************ 00:35:06.365 START TEST nvmf_zcopy 00:35:06.365 ************************************ 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:35:06.365 * Looking for test storage... 
00:35:06.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:35:06.365 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:06.625 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:06.625 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:06.625 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:06.625 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:06.625 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:35:06.625 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:35:06.625 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:35:06.625 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:35:06.625 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:35:06.625 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:35:06.625 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:35:06.625 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:06.625 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:35:06.625 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:35:06.625 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:06.625 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:06.625 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:35:06.625 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:35:06.625 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:06.625 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:35:06.625 14:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:35:06.625 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:35:06.625 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:35:06.625 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:06.625 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:35:06.625 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:35:06.625 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:06.625 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:06.625 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:35:06.625 14:06:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:06.625 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:06.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.625 --rc genhtml_branch_coverage=1 00:35:06.625 --rc genhtml_function_coverage=1 00:35:06.625 --rc genhtml_legend=1 00:35:06.625 --rc geninfo_all_blocks=1 00:35:06.625 --rc geninfo_unexecuted_blocks=1 00:35:06.625 00:35:06.625 ' 00:35:06.625 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:06.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.625 --rc genhtml_branch_coverage=1 00:35:06.625 --rc genhtml_function_coverage=1 00:35:06.625 --rc genhtml_legend=1 00:35:06.625 --rc geninfo_all_blocks=1 00:35:06.625 --rc geninfo_unexecuted_blocks=1 00:35:06.625 00:35:06.625 ' 00:35:06.625 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:06.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.625 --rc genhtml_branch_coverage=1 00:35:06.625 --rc genhtml_function_coverage=1 00:35:06.625 --rc genhtml_legend=1 00:35:06.625 --rc geninfo_all_blocks=1 00:35:06.625 --rc geninfo_unexecuted_blocks=1 00:35:06.625 00:35:06.625 ' 00:35:06.625 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:06.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.625 --rc genhtml_branch_coverage=1 00:35:06.625 --rc genhtml_function_coverage=1 00:35:06.625 --rc genhtml_legend=1 00:35:06.625 --rc geninfo_all_blocks=1 00:35:06.625 --rc geninfo_unexecuted_blocks=1 00:35:06.625 00:35:06.626 ' 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:06.626 14:06:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:06.626 14:06:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:35:06.626 14:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:13.194 
14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:13.194 14:06:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:13.194 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:13.194 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:13.194 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:13.195 Found net devices under 0000:86:00.0: cvl_0_0 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:13.195 Found net devices under 0000:86:00.1: cvl_0_1 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:13.195 14:06:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:13.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:13.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.489 ms 00:35:13.195 00:35:13.195 --- 10.0.0.2 ping statistics --- 00:35:13.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:13.195 rtt min/avg/max/mdev = 0.489/0.489/0.489/0.000 ms 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:13.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:13.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:35:13.195 00:35:13.195 --- 10.0.0.1 ping statistics --- 00:35:13.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:13.195 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=883738 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 883738 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 883738 ']' 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:13.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:13.195 14:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:13.195 [2024-12-05 14:06:54.979833] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:13.195 [2024-12-05 14:06:54.980789] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:35:13.195 [2024-12-05 14:06:54.980825] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:13.195 [2024-12-05 14:06:55.060022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:13.195 [2024-12-05 14:06:55.100365] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:13.195 [2024-12-05 14:06:55.100402] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:13.195 [2024-12-05 14:06:55.100409] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:13.195 [2024-12-05 14:06:55.100415] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:13.195 [2024-12-05 14:06:55.100423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:13.196 [2024-12-05 14:06:55.100977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:13.196 [2024-12-05 14:06:55.168320] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:13.196 [2024-12-05 14:06:55.168515] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:13.196 [2024-12-05 14:06:55.233746] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:13.196 
14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:13.196 [2024-12-05 14:06:55.261936] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:13.196 malloc0 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:13.196 { 00:35:13.196 "params": { 00:35:13.196 "name": "Nvme$subsystem", 00:35:13.196 "trtype": "$TEST_TRANSPORT", 00:35:13.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:13.196 "adrfam": "ipv4", 00:35:13.196 "trsvcid": "$NVMF_PORT", 00:35:13.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:13.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:13.196 "hdgst": ${hdgst:-false}, 00:35:13.196 "ddgst": ${ddgst:-false} 00:35:13.196 }, 00:35:13.196 "method": "bdev_nvme_attach_controller" 00:35:13.196 } 00:35:13.196 EOF 00:35:13.196 )") 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:35:13.196 14:06:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:35:13.196 14:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:13.196 "params": { 00:35:13.196 "name": "Nvme1", 00:35:13.196 "trtype": "tcp", 00:35:13.196 "traddr": "10.0.0.2", 00:35:13.196 "adrfam": "ipv4", 00:35:13.196 "trsvcid": "4420", 00:35:13.196 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:13.196 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:13.196 "hdgst": false, 00:35:13.196 "ddgst": false 00:35:13.196 }, 00:35:13.196 "method": "bdev_nvme_attach_controller" 00:35:13.196 }' 00:35:13.196 [2024-12-05 14:06:55.357311] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:35:13.196 [2024-12-05 14:06:55.357391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid883758 ] 00:35:13.196 [2024-12-05 14:06:55.431147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:13.196 [2024-12-05 14:06:55.472269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:13.196 Running I/O for 10 seconds... 
00:35:15.512 8676.00 IOPS, 67.78 MiB/s [2024-12-05T13:06:59.035Z] 8676.00 IOPS, 67.78 MiB/s [2024-12-05T13:06:59.972Z] 8674.67 IOPS, 67.77 MiB/s [2024-12-05T13:07:00.907Z] 8676.50 IOPS, 67.79 MiB/s [2024-12-05T13:07:01.843Z] 8651.60 IOPS, 67.59 MiB/s [2024-12-05T13:07:02.779Z] 8655.50 IOPS, 67.62 MiB/s [2024-12-05T13:07:04.152Z] 8658.29 IOPS, 67.64 MiB/s [2024-12-05T13:07:05.086Z] 8667.25 IOPS, 67.71 MiB/s [2024-12-05T13:07:06.022Z] 8663.78 IOPS, 67.69 MiB/s [2024-12-05T13:07:06.022Z] 8663.70 IOPS, 67.69 MiB/s 00:35:23.435 Latency(us) 00:35:23.435 [2024-12-05T13:07:06.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:23.435 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:35:23.435 Verification LBA range: start 0x0 length 0x1000 00:35:23.435 Nvme1n1 : 10.01 8667.82 67.72 0.00 0.00 14725.67 1997.29 21096.35 00:35:23.435 [2024-12-05T13:07:06.022Z] =================================================================================================================== 00:35:23.435 [2024-12-05T13:07:06.022Z] Total : 8667.82 67.72 0.00 0.00 14725.67 1997.29 21096.35 00:35:23.435 14:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=885371 00:35:23.435 14:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:35:23.435 14:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:23.435 14:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:35:23.435 14:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:35:23.435 14:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:35:23.435 14:07:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:35:23.435 14:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:23.435 14:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:23.435 { 00:35:23.435 "params": { 00:35:23.435 "name": "Nvme$subsystem", 00:35:23.435 "trtype": "$TEST_TRANSPORT", 00:35:23.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:23.435 "adrfam": "ipv4", 00:35:23.435 "trsvcid": "$NVMF_PORT", 00:35:23.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:23.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:23.435 "hdgst": ${hdgst:-false}, 00:35:23.435 "ddgst": ${ddgst:-false} 00:35:23.435 }, 00:35:23.435 "method": "bdev_nvme_attach_controller" 00:35:23.435 } 00:35:23.435 EOF 00:35:23.435 )") 00:35:23.435 [2024-12-05 14:07:05.945313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.435 [2024-12-05 14:07:05.945344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.435 14:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:35:23.435 14:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:35:23.435 14:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:35:23.435 14:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:23.435 "params": { 00:35:23.435 "name": "Nvme1", 00:35:23.435 "trtype": "tcp", 00:35:23.435 "traddr": "10.0.0.2", 00:35:23.435 "adrfam": "ipv4", 00:35:23.435 "trsvcid": "4420", 00:35:23.435 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:23.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:23.435 "hdgst": false, 00:35:23.435 "ddgst": false 00:35:23.435 }, 00:35:23.435 "method": "bdev_nvme_attach_controller" 00:35:23.435 }' 00:35:23.435 [2024-12-05 14:07:05.957278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.435 [2024-12-05 14:07:05.957291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.435 [2024-12-05 14:07:05.969277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.435 [2024-12-05 14:07:05.969289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.435 [2024-12-05 14:07:05.981276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.435 [2024-12-05 14:07:05.981287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.435 [2024-12-05 14:07:05.988301] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:35:23.435 [2024-12-05 14:07:05.988341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid885371 ] 00:35:23.435 [2024-12-05 14:07:05.993276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.435 [2024-12-05 14:07:05.993288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.435 [2024-12-05 14:07:06.005274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.435 [2024-12-05 14:07:06.005284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.435 [2024-12-05 14:07:06.017274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.435 [2024-12-05 14:07:06.017283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.694 [2024-12-05 14:07:06.029274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.694 [2024-12-05 14:07:06.029288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.694 [2024-12-05 14:07:06.041273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.694 [2024-12-05 14:07:06.041283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.694 [2024-12-05 14:07:06.053274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.694 [2024-12-05 14:07:06.053283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.694 [2024-12-05 14:07:06.062710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:23.694 [2024-12-05 14:07:06.065276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:35:23.694 [2024-12-05 14:07:06.065285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.694 [2024-12-05 14:07:06.077278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.694 [2024-12-05 14:07:06.077292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.694 [2024-12-05 14:07:06.089275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.694 [2024-12-05 14:07:06.089286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.694 [2024-12-05 14:07:06.101274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.694 [2024-12-05 14:07:06.101286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.694 [2024-12-05 14:07:06.105199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:23.694 [2024-12-05 14:07:06.113275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.694 [2024-12-05 14:07:06.113285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.694 [2024-12-05 14:07:06.125291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.694 [2024-12-05 14:07:06.125314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.694 [2024-12-05 14:07:06.137280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.694 [2024-12-05 14:07:06.137294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.694 [2024-12-05 14:07:06.149278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.694 [2024-12-05 14:07:06.149291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.694 [2024-12-05 14:07:06.161278] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.694 [2024-12-05 14:07:06.161291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.694 [2024-12-05 14:07:06.173277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.694 [2024-12-05 14:07:06.173290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.694 [2024-12-05 14:07:06.185275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.694 [2024-12-05 14:07:06.185284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.694 [2024-12-05 14:07:06.197289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.694 [2024-12-05 14:07:06.197309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.694 [2024-12-05 14:07:06.209280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.694 [2024-12-05 14:07:06.209293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.694 [2024-12-05 14:07:06.221279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.694 [2024-12-05 14:07:06.221292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.694 [2024-12-05 14:07:06.233275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.694 [2024-12-05 14:07:06.233285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.694 [2024-12-05 14:07:06.245275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.694 [2024-12-05 14:07:06.245290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.694 [2024-12-05 14:07:06.257275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:35:23.694 [2024-12-05 14:07:06.257287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.694 [2024-12-05 14:07:06.269280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.694 [2024-12-05 14:07:06.269293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.953 [2024-12-05 14:07:06.281275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.953 [2024-12-05 14:07:06.281285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.953 [2024-12-05 14:07:06.293274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.953 [2024-12-05 14:07:06.293283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.953 [2024-12-05 14:07:06.305274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.953 [2024-12-05 14:07:06.305283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.953 [2024-12-05 14:07:06.317278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.953 [2024-12-05 14:07:06.317291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.953 [2024-12-05 14:07:06.329275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.953 [2024-12-05 14:07:06.329284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.953 [2024-12-05 14:07:06.341274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.953 [2024-12-05 14:07:06.341284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.953 [2024-12-05 14:07:06.353275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.953 
[2024-12-05 14:07:06.353285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.953 [2024-12-05 14:07:06.365276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.953 [2024-12-05 14:07:06.365287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.953 [2024-12-05 14:07:06.377285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.953 [2024-12-05 14:07:06.377295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.953 [2024-12-05 14:07:06.389275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.953 [2024-12-05 14:07:06.389284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.953 [2024-12-05 14:07:06.401276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.953 [2024-12-05 14:07:06.401286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.953 [2024-12-05 14:07:06.413280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.953 [2024-12-05 14:07:06.413297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.953 Running I/O for 5 seconds... 
00:35:23.953 [2024-12-05 14:07:06.429154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.953 [2024-12-05 14:07:06.429174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.953 [2024-12-05 14:07:06.440316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.953 [2024-12-05 14:07:06.440334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.953 [2024-12-05 14:07:06.455085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.953 [2024-12-05 14:07:06.455103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.953 [2024-12-05 14:07:06.470006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.953 [2024-12-05 14:07:06.470024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.953 [2024-12-05 14:07:06.484917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.953 [2024-12-05 14:07:06.484939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.953 [2024-12-05 14:07:06.499142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.953 [2024-12-05 14:07:06.499159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.953 [2024-12-05 14:07:06.513646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.953 [2024-12-05 14:07:06.513663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:23.953 [2024-12-05 14:07:06.529162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:23.953 [2024-12-05 14:07:06.529180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.212 [2024-12-05 14:07:06.542112] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.212 [2024-12-05 14:07:06.542131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.212 [2024-12-05 14:07:06.556791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.212 [2024-12-05 14:07:06.556809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.212 [2024-12-05 14:07:06.569950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.212 [2024-12-05 14:07:06.569967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.212 [2024-12-05 14:07:06.582969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.212 [2024-12-05 14:07:06.582987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.213 [2024-12-05 14:07:06.597696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.213 [2024-12-05 14:07:06.597714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.213 [2024-12-05 14:07:06.612776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.213 [2024-12-05 14:07:06.612794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.213 [2024-12-05 14:07:06.626859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.213 [2024-12-05 14:07:06.626876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.213 [2024-12-05 14:07:06.641789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.213 [2024-12-05 14:07:06.641807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.213 [2024-12-05 14:07:06.657421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:35:24.213 [2024-12-05 14:07:06.657439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.213 [2024-12-05 14:07:06.670714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.213 [2024-12-05 14:07:06.670732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.213 [2024-12-05 14:07:06.685423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.213 [2024-12-05 14:07:06.685441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.213 [2024-12-05 14:07:06.696442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.213 [2024-12-05 14:07:06.696459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.213 [2024-12-05 14:07:06.711180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.213 [2024-12-05 14:07:06.711198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.213 [2024-12-05 14:07:06.726107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.213 [2024-12-05 14:07:06.726125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.213 [2024-12-05 14:07:06.740720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.213 [2024-12-05 14:07:06.740738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.213 [2024-12-05 14:07:06.754163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.213 [2024-12-05 14:07:06.754185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.213 [2024-12-05 14:07:06.769364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.213 
[2024-12-05 14:07:06.769386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.213 [2024-12-05 14:07:06.782297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.213 [2024-12-05 14:07:06.782314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.213 [2024-12-05 14:07:06.797607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.213 [2024-12-05 14:07:06.797624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.498 [2024-12-05 14:07:06.813333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.498 [2024-12-05 14:07:06.813351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.498 [2024-12-05 14:07:06.826999] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.498 [2024-12-05 14:07:06.827017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.498 [2024-12-05 14:07:06.841767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.498 [2024-12-05 14:07:06.841784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.498 [2024-12-05 14:07:06.856919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.498 [2024-12-05 14:07:06.856937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.498 [2024-12-05 14:07:06.869859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.498 [2024-12-05 14:07:06.869877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.498 [2024-12-05 14:07:06.882677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.498 [2024-12-05 14:07:06.882694] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.498 [2024-12-05 14:07:06.897132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.498 [2024-12-05 14:07:06.897150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.498 [2024-12-05 14:07:06.910776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.498 [2024-12-05 14:07:06.910794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.498 [2024-12-05 14:07:06.925354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.498 [2024-12-05 14:07:06.925378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.498 [2024-12-05 14:07:06.936355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.498 [2024-12-05 14:07:06.936379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.498 [2024-12-05 14:07:06.951015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.498 [2024-12-05 14:07:06.951032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.498 [2024-12-05 14:07:06.965809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.498 [2024-12-05 14:07:06.965828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.498 [2024-12-05 14:07:06.981572] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.498 [2024-12-05 14:07:06.981591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.498 [2024-12-05 14:07:06.997163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.498 [2024-12-05 14:07:06.997181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:35:24.498 [2024-12-05 14:07:07.009640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.498 [2024-12-05 14:07:07.009658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.498 [2024-12-05 14:07:07.022745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.498 [2024-12-05 14:07:07.022762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.498 [2024-12-05 14:07:07.037384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.498 [2024-12-05 14:07:07.037402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.498 [2024-12-05 14:07:07.048171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.498 [2024-12-05 14:07:07.048188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.498 [2024-12-05 14:07:07.063072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.498 [2024-12-05 14:07:07.063091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.821 [2024-12-05 14:07:07.077990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.821 [2024-12-05 14:07:07.078014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.822 [2024-12-05 14:07:07.092751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.822 [2024-12-05 14:07:07.092771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.822 [2024-12-05 14:07:07.106074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.822 [2024-12-05 14:07:07.106093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.822 [2024-12-05 14:07:07.117207] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.822 [2024-12-05 14:07:07.117226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.822 [2024-12-05 14:07:07.131152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.822 [2024-12-05 14:07:07.131169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.822 [2024-12-05 14:07:07.145727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.822 [2024-12-05 14:07:07.145744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.822 [2024-12-05 14:07:07.160598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.822 [2024-12-05 14:07:07.160615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.822 [2024-12-05 14:07:07.174891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.822 [2024-12-05 14:07:07.174909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.822 [2024-12-05 14:07:07.189067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.822 [2024-12-05 14:07:07.189084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.822 [2024-12-05 14:07:07.203340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.822 [2024-12-05 14:07:07.203357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.822 [2024-12-05 14:07:07.217529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.822 [2024-12-05 14:07:07.217547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.822 [2024-12-05 14:07:07.228001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:35:24.822 [2024-12-05 14:07:07.228019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.822 [2024-12-05 14:07:07.243352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.822 [2024-12-05 14:07:07.243377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.822 [2024-12-05 14:07:07.258136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.822 [2024-12-05 14:07:07.258154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.822 [2024-12-05 14:07:07.272994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.822 [2024-12-05 14:07:07.273012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.822 [2024-12-05 14:07:07.286871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.822 [2024-12-05 14:07:07.286889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.822 [2024-12-05 14:07:07.301180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.822 [2024-12-05 14:07:07.301197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.822 [2024-12-05 14:07:07.313688] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.822 [2024-12-05 14:07:07.313705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.822 [2024-12-05 14:07:07.327375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.822 [2024-12-05 14:07:07.327392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.822 [2024-12-05 14:07:07.341725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.822 
[2024-12-05 14:07:07.341742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.822 [2024-12-05 14:07:07.354048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.822 [2024-12-05 14:07:07.354064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.822 [2024-12-05 14:07:07.367359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.822 [2024-12-05 14:07:07.367386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.822 [2024-12-05 14:07:07.382534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:24.822 [2024-12-05 14:07:07.382552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.106 [2024-12-05 14:07:07.397053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.106 [2024-12-05 14:07:07.397073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.106 [2024-12-05 14:07:07.411020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.106 [2024-12-05 14:07:07.411039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.106 16805.00 IOPS, 131.29 MiB/s [2024-12-05T13:07:07.693Z] [2024-12-05 14:07:07.425986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.106 [2024-12-05 14:07:07.426005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.106 [2024-12-05 14:07:07.438020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.106 [2024-12-05 14:07:07.438037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.106 [2024-12-05 14:07:07.452963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.106 
[2024-12-05 14:07:07.452982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.106 [2024-12-05 14:07:07.466103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.106 [2024-12-05 14:07:07.466122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.106 [2024-12-05 14:07:07.481678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.106 [2024-12-05 14:07:07.481695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.106 [2024-12-05 14:07:07.497631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.106 [2024-12-05 14:07:07.497649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.106 [2024-12-05 14:07:07.511091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.106 [2024-12-05 14:07:07.511109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.106 [2024-12-05 14:07:07.525439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.106 [2024-12-05 14:07:07.525457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.106 [2024-12-05 14:07:07.536481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.106 [2024-12-05 14:07:07.536499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.106 [2024-12-05 14:07:07.551113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.106 [2024-12-05 14:07:07.551132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.106 [2024-12-05 14:07:07.565757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.106 [2024-12-05 14:07:07.565775] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.106 [2024-12-05 14:07:07.581244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.106 [2024-12-05 14:07:07.581262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.106 [2024-12-05 14:07:07.595398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.106 [2024-12-05 14:07:07.595417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.106 [2024-12-05 14:07:07.609962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.106 [2024-12-05 14:07:07.609987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.106 [2024-12-05 14:07:07.624804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.106 [2024-12-05 14:07:07.624822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.106 [2024-12-05 14:07:07.639246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.106 [2024-12-05 14:07:07.639264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.106 [2024-12-05 14:07:07.653381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.106 [2024-12-05 14:07:07.653398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.106 [2024-12-05 14:07:07.664548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.106 [2024-12-05 14:07:07.664566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.106 [2024-12-05 14:07:07.679177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.106 [2024-12-05 14:07:07.679195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:35:25.365 [2024-12-05 14:07:07.693793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.365 [2024-12-05 14:07:07.693811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.365 [2024-12-05 14:07:07.708867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.365 [2024-12-05 14:07:07.708885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.365 [2024-12-05 14:07:07.722843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.365 [2024-12-05 14:07:07.722861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.365 [2024-12-05 14:07:07.737212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.365 [2024-12-05 14:07:07.737231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.365 [2024-12-05 14:07:07.750988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.365 [2024-12-05 14:07:07.751006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.365 [2024-12-05 14:07:07.765189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.365 [2024-12-05 14:07:07.765207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.365 [2024-12-05 14:07:07.777605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.365 [2024-12-05 14:07:07.777622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.365 [2024-12-05 14:07:07.790740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.365 [2024-12-05 14:07:07.790758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.365 [2024-12-05 14:07:07.805597] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.365 [2024-12-05 14:07:07.805620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.365 [2024-12-05 14:07:07.821144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.365 [2024-12-05 14:07:07.821162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.365 [2024-12-05 14:07:07.835001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.365 [2024-12-05 14:07:07.835019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.365 [2024-12-05 14:07:07.849518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.365 [2024-12-05 14:07:07.849536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.365 [2024-12-05 14:07:07.862213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.365 [2024-12-05 14:07:07.862230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.365 [2024-12-05 14:07:07.876725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.365 [2024-12-05 14:07:07.876743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.365 [2024-12-05 14:07:07.891192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.365 [2024-12-05 14:07:07.891210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.365 [2024-12-05 14:07:07.905844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.365 [2024-12-05 14:07:07.905862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.365 [2024-12-05 14:07:07.918529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:35:25.365 [2024-12-05 14:07:07.918545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.365 [2024-12-05 14:07:07.929575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.365 [2024-12-05 14:07:07.929592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.365 [2024-12-05 14:07:07.943273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.365 [2024-12-05 14:07:07.943291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.624 [2024-12-05 14:07:07.957765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.624 [2024-12-05 14:07:07.957782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.624 [2024-12-05 14:07:07.973073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.624 [2024-12-05 14:07:07.973091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.624 [2024-12-05 14:07:07.985169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.624 [2024-12-05 14:07:07.985187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.624 [2024-12-05 14:07:07.999135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.624 [2024-12-05 14:07:07.999155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.624 [2024-12-05 14:07:08.013681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.624 [2024-12-05 14:07:08.013699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.624 [2024-12-05 14:07:08.028936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.624 
[2024-12-05 14:07:08.028954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.624 [2024-12-05 14:07:08.042517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.624 [2024-12-05 14:07:08.042535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.624 [2024-12-05 14:07:08.053795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.624 [2024-12-05 14:07:08.053812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.624 [2024-12-05 14:07:08.066698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.624 [2024-12-05 14:07:08.066721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.624 [2024-12-05 14:07:08.081105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.624 [2024-12-05 14:07:08.081123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.624 [2024-12-05 14:07:08.093884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.624 [2024-12-05 14:07:08.093901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.624 [2024-12-05 14:07:08.108669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.624 [2024-12-05 14:07:08.108687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.624 [2024-12-05 14:07:08.122024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.624 [2024-12-05 14:07:08.122042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.624 [2024-12-05 14:07:08.137284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.624 [2024-12-05 14:07:08.137302] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.624 [2024-12-05 14:07:08.150742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.624 [2024-12-05 14:07:08.150760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.624 [2024-12-05 14:07:08.165302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.624 [2024-12-05 14:07:08.165320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.624 [2024-12-05 14:07:08.179305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.624 [2024-12-05 14:07:08.179323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.624 [2024-12-05 14:07:08.193634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.624 [2024-12-05 14:07:08.193651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.624 [2024-12-05 14:07:08.209452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.624 [2024-12-05 14:07:08.209471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.883 [2024-12-05 14:07:08.222659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.883 [2024-12-05 14:07:08.222677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.883 [2024-12-05 14:07:08.237006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.883 [2024-12-05 14:07:08.237024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.883 [2024-12-05 14:07:08.251409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.883 [2024-12-05 14:07:08.251427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:35:25.883 [2024-12-05 14:07:08.266213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.883 [2024-12-05 14:07:08.266231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.883 [2024-12-05 14:07:08.281876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.883 [2024-12-05 14:07:08.281894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.883 [2024-12-05 14:07:08.297256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.883 [2024-12-05 14:07:08.297274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.883 [2024-12-05 14:07:08.310039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.883 [2024-12-05 14:07:08.310055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.883 [2024-12-05 14:07:08.324823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.883 [2024-12-05 14:07:08.324841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.883 [2024-12-05 14:07:08.339560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.883 [2024-12-05 14:07:08.339582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.883 [2024-12-05 14:07:08.354393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.883 [2024-12-05 14:07:08.354411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.883 [2024-12-05 14:07:08.369037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.883 [2024-12-05 14:07:08.369055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.883 [2024-12-05 14:07:08.383350] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.883 [2024-12-05 14:07:08.383376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.883 [2024-12-05 14:07:08.397695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.883 [2024-12-05 14:07:08.397712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.883 [2024-12-05 14:07:08.413427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.883 [2024-12-05 14:07:08.413444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.883 16893.50 IOPS, 131.98 MiB/s [2024-12-05T13:07:08.470Z] [2024-12-05 14:07:08.426419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.883 [2024-12-05 14:07:08.426436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.883 [2024-12-05 14:07:08.437599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.883 [2024-12-05 14:07:08.437615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.883 [2024-12-05 14:07:08.451433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.883 [2024-12-05 14:07:08.451450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:25.883 [2024-12-05 14:07:08.466018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:25.883 [2024-12-05 14:07:08.466036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.141 [2024-12-05 14:07:08.481180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.141 [2024-12-05 14:07:08.481198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.142 [2024-12-05 14:07:08.495325] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.142 [2024-12-05 14:07:08.495342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.142 [2024-12-05 14:07:08.510355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.142 [2024-12-05 14:07:08.510381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.142 [2024-12-05 14:07:08.525163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.142 [2024-12-05 14:07:08.525181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.142 [2024-12-05 14:07:08.539235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.142 [2024-12-05 14:07:08.539253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.142 [2024-12-05 14:07:08.553791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.142 [2024-12-05 14:07:08.553809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.142 [2024-12-05 14:07:08.569269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.142 [2024-12-05 14:07:08.569287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.142 [2024-12-05 14:07:08.581296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.142 [2024-12-05 14:07:08.581314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.142 [2024-12-05 14:07:08.594999] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.142 [2024-12-05 14:07:08.595016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.142 [2024-12-05 14:07:08.609374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:35:26.142 [2024-12-05 14:07:08.609391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.142 [2024-12-05 14:07:08.622259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.142 [2024-12-05 14:07:08.622276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.142 [2024-12-05 14:07:08.636900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.142 [2024-12-05 14:07:08.636917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.142 [2024-12-05 14:07:08.649882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.142 [2024-12-05 14:07:08.649899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.142 [2024-12-05 14:07:08.662566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.142 [2024-12-05 14:07:08.662583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.142 [2024-12-05 14:07:08.677397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.142 [2024-12-05 14:07:08.677415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.142 [2024-12-05 14:07:08.690958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.142 [2024-12-05 14:07:08.690975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.142 [2024-12-05 14:07:08.705710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.142 [2024-12-05 14:07:08.705726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.142 [2024-12-05 14:07:08.721360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.142 
[2024-12-05 14:07:08.721382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.400 [2024-12-05 14:07:08.734919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.400 [2024-12-05 14:07:08.734937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.400 [2024-12-05 14:07:08.749845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.400 [2024-12-05 14:07:08.749862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.400 [2024-12-05 14:07:08.764937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.400 [2024-12-05 14:07:08.764954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.400 [2024-12-05 14:07:08.779167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.401 [2024-12-05 14:07:08.779185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.401 [2024-12-05 14:07:08.793598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.401 [2024-12-05 14:07:08.793615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.401 [2024-12-05 14:07:08.809193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.401 [2024-12-05 14:07:08.809214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.401 [2024-12-05 14:07:08.822174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.401 [2024-12-05 14:07:08.822191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.401 [2024-12-05 14:07:08.837051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.401 [2024-12-05 14:07:08.837070] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.401 [2024-12-05 14:07:08.851232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.401 [2024-12-05 14:07:08.851251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.401 [2024-12-05 14:07:08.866151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.401 [2024-12-05 14:07:08.866170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.401 [2024-12-05 14:07:08.880815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.401 [2024-12-05 14:07:08.880833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.401 [2024-12-05 14:07:08.893343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.401 [2024-12-05 14:07:08.893361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.401 [2024-12-05 14:07:08.906931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.401 [2024-12-05 14:07:08.906950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.401 [2024-12-05 14:07:08.921301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.401 [2024-12-05 14:07:08.921320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.401 [2024-12-05 14:07:08.932573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.401 [2024-12-05 14:07:08.932591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.401 [2024-12-05 14:07:08.946976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.401 [2024-12-05 14:07:08.946994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:35:26.401 [2024-12-05 14:07:08.961684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.401 [2024-12-05 14:07:08.961701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.401 [2024-12-05 14:07:08.972888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.401 [2024-12-05 14:07:08.972906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.660 [2024-12-05 14:07:08.987468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.660 [2024-12-05 14:07:08.987485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.660 [2024-12-05 14:07:09.002012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.660 [2024-12-05 14:07:09.002030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.660 [2024-12-05 14:07:09.017274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.660 [2024-12-05 14:07:09.017293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.660 [2024-12-05 14:07:09.030581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.660 [2024-12-05 14:07:09.030600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.660 [2024-12-05 14:07:09.042635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.660 [2024-12-05 14:07:09.042653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.660 [2024-12-05 14:07:09.057525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.660 [2024-12-05 14:07:09.057543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.660 [2024-12-05 14:07:09.067996] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.660 [2024-12-05 14:07:09.068014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.660 [2024-12-05 14:07:09.082604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.660 [2024-12-05 14:07:09.082621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.660 [2024-12-05 14:07:09.097089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.660 [2024-12-05 14:07:09.097107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.660 [2024-12-05 14:07:09.110993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.660 [2024-12-05 14:07:09.111011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.660 [2024-12-05 14:07:09.125233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.660 [2024-12-05 14:07:09.125251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.660 [2024-12-05 14:07:09.138830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.660 [2024-12-05 14:07:09.138849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.660 [2024-12-05 14:07:09.153593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.660 [2024-12-05 14:07:09.153611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.660 [2024-12-05 14:07:09.168739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.660 [2024-12-05 14:07:09.168757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.660 [2024-12-05 14:07:09.182909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:35:26.661 [2024-12-05 14:07:09.182927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.661 [2024-12-05 14:07:09.197224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.661 [2024-12-05 14:07:09.197242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.661 [2024-12-05 14:07:09.210910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.661 [2024-12-05 14:07:09.210927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.661 [2024-12-05 14:07:09.224979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.661 [2024-12-05 14:07:09.224997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.661 [2024-12-05 14:07:09.238788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.661 [2024-12-05 14:07:09.238806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.919 [2024-12-05 14:07:09.253388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.919 [2024-12-05 14:07:09.253422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.919 [2024-12-05 14:07:09.266084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.919 [2024-12-05 14:07:09.266101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.919 [2024-12-05 14:07:09.281692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.919 [2024-12-05 14:07:09.281710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.919 [2024-12-05 14:07:09.297479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.919 
[2024-12-05 14:07:09.297497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.919 [2024-12-05 14:07:09.309596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.919 [2024-12-05 14:07:09.309613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.919 [2024-12-05 14:07:09.324709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.919 [2024-12-05 14:07:09.324726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.919 [2024-12-05 14:07:09.339061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.919 [2024-12-05 14:07:09.339078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.919 [2024-12-05 14:07:09.353635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.919 [2024-12-05 14:07:09.353652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.919 [2024-12-05 14:07:09.369192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.919 [2024-12-05 14:07:09.369210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.919 [2024-12-05 14:07:09.383017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.919 [2024-12-05 14:07:09.383036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.919 [2024-12-05 14:07:09.397511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.919 [2024-12-05 14:07:09.397530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.919 [2024-12-05 14:07:09.408451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.919 [2024-12-05 14:07:09.408468] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.919 [2024-12-05 14:07:09.422968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.919 [2024-12-05 14:07:09.422985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.919 16909.00 IOPS, 132.10 MiB/s [2024-12-05T13:07:09.506Z] [2024-12-05 14:07:09.437437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.919 [2024-12-05 14:07:09.437455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.919 [2024-12-05 14:07:09.449746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.919 [2024-12-05 14:07:09.449763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.919 [2024-12-05 14:07:09.462890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.919 [2024-12-05 14:07:09.462907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.919 [2024-12-05 14:07:09.477623] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.919 [2024-12-05 14:07:09.477639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.919 [2024-12-05 14:07:09.488838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.919 [2024-12-05 14:07:09.488855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.919 [2024-12-05 14:07:09.503400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:26.919 [2024-12-05 14:07:09.503418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.178 [2024-12-05 14:07:09.517809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.178 [2024-12-05 14:07:09.517826] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.178 [2024-12-05 14:07:09.532945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:27.178 [2024-12-05 14:07:09.532963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:27.956 16882.75 IOPS, 131.90 MiB/s [2024-12-05T13:07:10.543Z] 00:35:28.997 16873.20 IOPS, 131.82 MiB/s [2024-12-05T13:07:11.584Z] 00:35:28.997 00:35:28.997 Latency(us) 00:35:28.997 [2024-12-05T13:07:11.584Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:28.997 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:35:28.997 Nvme1n1 : 5.01 16872.82 131.82 0.00 0.00 7578.75 2153.33 13981.01 00:35:28.997 [2024-12-05T13:07:11.584Z] =================================================================================================================== 00:35:28.997 [2024-12-05T13:07:11.584Z] Total : 16872.82 131.82 0.00 0.00 7578.75 2153.33 13981.01 00:35:29.256 [2024-12-05 14:07:11.593274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:29.256 [2024-12-05 14:07:11.593283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:29.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (885371) - No such process 00:35:29.256 14:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 885371 00:35:29.256 14:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:29.256 14:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.256 14:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:29.256 14:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.256 14:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:29.256 14:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.256 14:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:29.256 delay0 00:35:29.256 14:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:29.256 14:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:35:29.256 14:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.256 14:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:29.256 14:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.256 14:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:35:29.256 [2024-12-05 14:07:11.738086] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:35:37.375 [2024-12-05 14:07:18.569782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b72690 is same with the state(6) to be set 00:35:37.375 Initializing NVMe Controllers 00:35:37.375 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:37.375 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:37.375 Initialization complete. Launching workers. 
00:35:37.375 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 4273 00:35:37.375 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 4559, failed to submit 34 00:35:37.375 success 4414, unsuccessful 145, failed 0 00:35:37.375 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:35:37.375 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:35:37.375 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:37.375 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:35:37.375 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:37.375 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:35:37.375 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:37.376 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:37.376 rmmod nvme_tcp 00:35:37.376 rmmod nvme_fabrics 00:35:37.376 rmmod nvme_keyring 00:35:37.376 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:37.376 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:35:37.376 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:35:37.376 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 883738 ']' 00:35:37.376 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 883738 00:35:37.376 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 
-- # '[' -z 883738 ']' 00:35:37.376 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 883738 00:35:37.376 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:35:37.376 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:37.376 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 883738 00:35:37.376 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:37.376 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:37.376 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 883738' 00:35:37.376 killing process with pid 883738 00:35:37.376 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 883738 00:35:37.376 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 883738 00:35:37.376 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:37.376 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:37.376 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:37.376 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:35:37.376 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:35:37.376 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:37.376 14:07:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:35:37.376 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:37.376 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:37.376 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:37.376 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:37.376 14:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.755 14:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:38.755 00:35:38.755 real 0m32.101s 00:35:38.755 user 0m41.572s 00:35:38.755 sys 0m12.834s 00:35:38.755 14:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:38.755 14:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:38.755 ************************************ 00:35:38.755 END TEST nvmf_zcopy 00:35:38.755 ************************************ 00:35:38.755 14:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:35:38.755 14:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:38.756 14:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:38.756 14:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:38.756 
************************************ 00:35:38.756 START TEST nvmf_nmic 00:35:38.756 ************************************ 00:35:38.756 14:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:35:38.756 * Looking for test storage... 00:35:38.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:35:38.756 14:07:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:35:38.756 14:07:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:38.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.756 --rc genhtml_branch_coverage=1 00:35:38.756 --rc genhtml_function_coverage=1 00:35:38.756 --rc genhtml_legend=1 00:35:38.756 --rc geninfo_all_blocks=1 00:35:38.756 --rc geninfo_unexecuted_blocks=1 00:35:38.756 00:35:38.756 ' 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:38.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.756 --rc genhtml_branch_coverage=1 00:35:38.756 --rc genhtml_function_coverage=1 00:35:38.756 --rc genhtml_legend=1 00:35:38.756 --rc geninfo_all_blocks=1 00:35:38.756 --rc geninfo_unexecuted_blocks=1 00:35:38.756 00:35:38.756 ' 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:38.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.756 --rc genhtml_branch_coverage=1 00:35:38.756 --rc genhtml_function_coverage=1 00:35:38.756 --rc genhtml_legend=1 00:35:38.756 --rc geninfo_all_blocks=1 00:35:38.756 --rc geninfo_unexecuted_blocks=1 00:35:38.756 00:35:38.756 ' 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:38.756 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.756 --rc genhtml_branch_coverage=1 00:35:38.756 --rc genhtml_function_coverage=1 00:35:38.756 --rc genhtml_legend=1 00:35:38.756 --rc geninfo_all_blocks=1 00:35:38.756 --rc geninfo_unexecuted_blocks=1 00:35:38.756 00:35:38.756 ' 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:38.756 14:07:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.756 14:07:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:38.756 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:38.757 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:38.757 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:38.757 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:38.757 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:38.757 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:38.757 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:38.757 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:35:38.757 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:38.757 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:38.757 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:38.757 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:35:38.757 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:38.757 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:38.757 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:38.757 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:38.757 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:38.757 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:38.757 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:38.757 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.757 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:38.757 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:38.757 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:35:38.757 14:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:45.324 14:07:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:45.324 14:07:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:45.324 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:45.324 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:45.325 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:45.325 14:07:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:45.325 Found net devices under 0000:86:00.0: cvl_0_0 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:45.325 14:07:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:45.325 Found net devices under 0000:86:00.1: cvl_0_1 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:45.325 14:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:45.325 14:07:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:45.325 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:45.325 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:45.325 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:45.325 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:45.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:45.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:35:45.325 00:35:45.325 --- 10.0.0.2 ping statistics --- 00:35:45.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.325 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:35:45.325 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:45.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:45.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:35:45.325 00:35:45.325 --- 10.0.0.1 ping statistics --- 00:35:45.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.325 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:35:45.325 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:45.325 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:35:45.325 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:45.325 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:45.325 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:45.325 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:45.325 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:45.325 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:45.325 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:45.325 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:35:45.325 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:45.325 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:45.325 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:45.325 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=890944 
00:35:45.325 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 890944 00:35:45.325 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:35:45.325 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 890944 ']' 00:35:45.325 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:45.325 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:45.325 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:45.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:45.325 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:45.325 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:45.325 [2024-12-05 14:07:27.169036] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:45.325 [2024-12-05 14:07:27.169991] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:35:45.325 [2024-12-05 14:07:27.170027] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:45.325 [2024-12-05 14:07:27.250322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:45.325 [2024-12-05 14:07:27.293641] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:45.325 [2024-12-05 14:07:27.293677] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:45.325 [2024-12-05 14:07:27.293684] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:45.325 [2024-12-05 14:07:27.293690] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:45.325 [2024-12-05 14:07:27.293695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:45.325 [2024-12-05 14:07:27.295228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:45.326 [2024-12-05 14:07:27.295341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:45.326 [2024-12-05 14:07:27.295449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:45.326 [2024-12-05 14:07:27.295450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:45.326 [2024-12-05 14:07:27.364911] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:45.326 [2024-12-05 14:07:27.365111] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:45.326 [2024-12-05 14:07:27.365682] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:35:45.326 [2024-12-05 14:07:27.365868] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:45.326 [2024-12-05 14:07:27.365928] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:45.326 [2024-12-05 14:07:27.432122] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:45.326 Malloc0 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:45.326 [2024-12-05 14:07:27.508380] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:45.326 14:07:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:35:45.326 test case1: single bdev can't be used in multiple subsystems 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:45.326 [2024-12-05 14:07:27.539816] 
bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:35:45.326 [2024-12-05 14:07:27.539835] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:35:45.326 [2024-12-05 14:07:27.539843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.326 request: 00:35:45.326 { 00:35:45.326 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:35:45.326 "namespace": { 00:35:45.326 "bdev_name": "Malloc0", 00:35:45.326 "no_auto_visible": false, 00:35:45.326 "hide_metadata": false 00:35:45.326 }, 00:35:45.326 "method": "nvmf_subsystem_add_ns", 00:35:45.326 "req_id": 1 00:35:45.326 } 00:35:45.326 Got JSON-RPC error response 00:35:45.326 response: 00:35:45.326 { 00:35:45.326 "code": -32602, 00:35:45.326 "message": "Invalid parameters" 00:35:45.326 } 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:35:45.326 Adding namespace failed - expected result. 
00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:35:45.326 test case2: host connect to nvmf target in multiple paths 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:45.326 [2024-12-05 14:07:27.551910] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:45.326 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:35:45.585 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:35:45.585 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:35:45.585 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:45.585 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:35:45.585 14:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:35:47.488 14:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:47.488 14:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:47.488 14:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:47.488 14:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:47.488 14:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:47.488 14:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:35:47.488 14:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:35:47.488 [global] 00:35:47.488 thread=1 00:35:47.488 invalidate=1 00:35:47.488 rw=write 00:35:47.488 time_based=1 00:35:47.488 runtime=1 00:35:47.488 ioengine=libaio 00:35:47.488 direct=1 00:35:47.488 bs=4096 00:35:47.488 iodepth=1 00:35:47.488 norandommap=0 00:35:47.488 numjobs=1 00:35:47.488 00:35:47.488 verify_dump=1 00:35:47.488 verify_backlog=512 00:35:47.488 verify_state_save=0 00:35:47.488 do_verify=1 00:35:47.488 verify=crc32c-intel 00:35:47.488 [job0] 00:35:47.489 filename=/dev/nvme0n1 00:35:47.489 Could not set queue depth (nvme0n1) 00:35:47.747 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:47.747 fio-3.35 00:35:47.747 Starting 1 thread 00:35:49.126 00:35:49.127 job0: (groupid=0, jobs=1): err= 0: pid=891560: Thu Dec 5 14:07:31 
2024 00:35:49.127 read: IOPS=21, BW=85.2KiB/s (87.2kB/s)(88.0KiB/1033msec) 00:35:49.127 slat (nsec): min=9835, max=24460, avg=21887.32, stdev=2769.64 00:35:49.127 clat (usec): min=40874, max=41078, avg=40968.99, stdev=52.50 00:35:49.127 lat (usec): min=40896, max=41100, avg=40990.88, stdev=52.91 00:35:49.127 clat percentiles (usec): 00:35:49.127 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:49.127 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:49.127 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:49.127 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:49.127 | 99.99th=[41157] 00:35:49.127 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:35:49.127 slat (nsec): min=9191, max=40046, avg=10304.50, stdev=1618.42 00:35:49.127 clat (usec): min=212, max=421, avg=243.21, stdev= 9.24 00:35:49.127 lat (usec): min=224, max=461, avg=253.51, stdev=10.32 00:35:49.127 clat percentiles (usec): 00:35:49.127 | 1.00th=[ 233], 5.00th=[ 237], 10.00th=[ 239], 20.00th=[ 241], 00:35:49.127 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 243], 60.00th=[ 243], 00:35:49.127 | 70.00th=[ 245], 80.00th=[ 245], 90.00th=[ 247], 95.00th=[ 249], 00:35:49.127 | 99.00th=[ 260], 99.50th=[ 277], 99.90th=[ 420], 99.95th=[ 420], 00:35:49.127 | 99.99th=[ 420] 00:35:49.127 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:35:49.127 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:49.127 lat (usec) : 250=93.82%, 500=2.06% 00:35:49.127 lat (msec) : 50=4.12% 00:35:49.127 cpu : usr=0.29%, sys=0.39%, ctx=534, majf=0, minf=1 00:35:49.127 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:49.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.127 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.127 issued rwts: total=22,512,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:35:49.127 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:49.127 00:35:49.127 Run status group 0 (all jobs): 00:35:49.127 READ: bw=85.2KiB/s (87.2kB/s), 85.2KiB/s-85.2KiB/s (87.2kB/s-87.2kB/s), io=88.0KiB (90.1kB), run=1033-1033msec 00:35:49.127 WRITE: bw=1983KiB/s (2030kB/s), 1983KiB/s-1983KiB/s (2030kB/s-2030kB/s), io=2048KiB (2097kB), run=1033-1033msec 00:35:49.127 00:35:49.127 Disk stats (read/write): 00:35:49.127 nvme0n1: ios=68/512, merge=0/0, ticks=755/122, in_queue=877, util=91.08% 00:35:49.127 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:49.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:35:49.127 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:49.127 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:35:49.127 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:49.127 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:49.127 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:49.127 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:49.127 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:35:49.127 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:35:49.127 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:35:49.127 14:07:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:49.127 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:35:49.127 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:49.127 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:35:49.127 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:49.127 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:49.127 rmmod nvme_tcp 00:35:49.127 rmmod nvme_fabrics 00:35:49.127 rmmod nvme_keyring 00:35:49.127 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:49.127 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:35:49.127 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:35:49.127 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 890944 ']' 00:35:49.127 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 890944 00:35:49.127 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 890944 ']' 00:35:49.127 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 890944 00:35:49.127 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:35:49.127 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:49.127 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 890944 00:35:49.127 
14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:49.127 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:49.127 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 890944' 00:35:49.127 killing process with pid 890944 00:35:49.127 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 890944 00:35:49.127 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 890944 00:35:49.386 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:49.386 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:49.386 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:49.386 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:35:49.386 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:35:49.386 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:49.386 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:35:49.386 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:49.386 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:49.386 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:49.386 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:49.386 14:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:51.925 14:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:51.925 00:35:51.925 real 0m12.952s 00:35:51.925 user 0m23.484s 00:35:51.925 sys 0m6.028s 00:35:51.925 14:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:51.925 14:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:51.925 ************************************ 00:35:51.925 END TEST nvmf_nmic 00:35:51.925 ************************************ 00:35:51.925 14:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:51.925 14:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:51.925 14:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:51.925 14:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:51.925 ************************************ 00:35:51.925 START TEST nvmf_fio_target 00:35:51.925 ************************************ 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:51.925 * Looking for test storage... 
00:35:51.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:51.925 
14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:51.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:51.925 --rc genhtml_branch_coverage=1 00:35:51.925 --rc genhtml_function_coverage=1 00:35:51.925 --rc genhtml_legend=1 00:35:51.925 --rc geninfo_all_blocks=1 00:35:51.925 --rc geninfo_unexecuted_blocks=1 00:35:51.925 00:35:51.925 ' 00:35:51.925 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:51.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:51.925 --rc genhtml_branch_coverage=1 00:35:51.925 --rc genhtml_function_coverage=1 00:35:51.925 --rc genhtml_legend=1 00:35:51.925 --rc geninfo_all_blocks=1 00:35:51.925 --rc geninfo_unexecuted_blocks=1 00:35:51.925 00:35:51.925 ' 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:51.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:51.926 --rc genhtml_branch_coverage=1 00:35:51.926 --rc genhtml_function_coverage=1 00:35:51.926 --rc genhtml_legend=1 00:35:51.926 --rc geninfo_all_blocks=1 00:35:51.926 --rc geninfo_unexecuted_blocks=1 00:35:51.926 00:35:51.926 ' 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:51.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:51.926 --rc genhtml_branch_coverage=1 00:35:51.926 --rc genhtml_function_coverage=1 00:35:51.926 --rc genhtml_legend=1 00:35:51.926 --rc geninfo_all_blocks=1 
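The trace above exercises `scripts/common.sh`'s version comparison (`lt 1.15 2` via `cmp_versions`): each version string is split on `.`, `-` and `:` into an array, then compared component-by-component as integers. A standalone sketch of that logic (our own function name, not SPDK's):

```shell
# Hypothetical re-implementation of the comparison the log walks through:
# split both versions on '.', '-' and ':' (the IFS used by cmp_versions),
# then decide on the first differing numeric component.
version_lt() {
    local -a ver1 ver2
    local IFS=.-:
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        (( a < b )) && return 0                 # first difference decides
        (( a > b )) && return 1
    done
    return 1                                    # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"     # the lcov check seen in the log
version_lt 2.1 2.1 || echo "2.1 == 2.1"
```

This is why the log shows `ver1[v]=1` losing to `ver2[v]=2` in the first iteration and the check returning 0.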
00:35:51.926 --rc geninfo_unexecuted_blocks=1 00:35:51.926 00:35:51.926 ' 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:51.926 
14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.926 14:07:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:51.926 
14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:51.926 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:51.927 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:51.927 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:51.927 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:51.927 14:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:51.927 14:07:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:58.496 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:58.496 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:58.496 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:58.496 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:58.496 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:58.496 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:58.496 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:58.496 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:58.496 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:58.497 14:07:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:58.497 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:58.497 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:58.497 
14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:58.497 Found net 
devices under 0000:86:00.0: cvl_0_0 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:58.497 Found net devices under 0000:86:00.1: cvl_0_1 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:58.497 14:07:39 
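The `Found net devices under 0000:86:00.0: cvl_0_0` lines come from `nvmf/common.sh@411`, which resolves each candidate PCI address to its kernel net device by globbing `/sys/bus/pci/devices/<pci>/net/*`. A minimal sketch of that lookup, parameterized on a sysfs root so it can be exercised against a fake tree without real hardware (the parameterization is ours, not SPDK's):

```shell
# The kernel exposes a NIC's netdev name as a directory under
# /sys/bus/pci/devices/<pci-addr>/net/. Globbing that path and stripping
# the directory prefix yields the interface names, as the log does.
pci_net_devs() {
    local sysfs_root=$1 pci=$2
    local -a devs=("$sysfs_root/devices/$pci/net/"*)
    [[ -e ${devs[0]} ]] || return 1      # no netdev bound to this PCI address
    printf '%s\n' "${devs[@]##*/}"       # strip path, keep interface names
}

# Fake sysfs tree mimicking the log's 0000:86:00.0 -> cvl_0_0 mapping
root=$(mktemp -d)
mkdir -p "$root/devices/0000:86:00.0/net/cvl_0_0"
pci_net_devs "$root" 0000:86:00.0    # prints: cvl_0_0
rm -rf "$root"
```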
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:58.497 14:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:58.497 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:58.497 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:58.497 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:58.497 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:58.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:58.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:35:58.497 00:35:58.497 --- 10.0.0.2 ping statistics --- 00:35:58.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:58.497 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:35:58.497 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:58.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:58.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:35:58.497 00:35:58.497 --- 10.0.0.1 ping statistics --- 00:35:58.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:58.497 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:58.498 14:07:40 
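The `nvmf_tcp_init` sequence above isolates one port of the NIC pair in its own network namespace (`cvl_0_0_ns_spdk`) so the target (10.0.0.2) and initiator (10.0.0.1) get independent network stacks on a single host, then opens TCP port 4420 through the firewall and ping-checks both directions. A dry-run sketch of that plumbing, condensed from the commands in the log; it requires root to run for real, so `RUN=echo` keeps it inert here:

```shell
# Dry-run of the namespace setup from nvmf/common.sh@265-291.
# Set RUN= (empty) and run as root to execute for real.
RUN=${RUN:-echo}
NS=cvl_0_0_ns_spdk

$RUN ip netns add "$NS"
$RUN ip link set cvl_0_0 netns "$NS"                         # target-side port
$RUN ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side
$RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0 # target side
$RUN ip link set cvl_0_1 up
$RUN ip netns exec "$NS" ip link set cvl_0_0 up
$RUN ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP listener port through the host firewall, as the log does
$RUN iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# connectivity check in both directions before starting the target
$RUN ping -c 1 10.0.0.2
$RUN ip netns exec "$NS" ping -c 1 10.0.0.1
```

Note the asymmetry: the target app is later launched with `ip netns exec cvl_0_0_ns_spdk`, which is why the log prepends `NVMF_TARGET_NS_CMD` to `NVMF_APP`.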
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=895320 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 895320 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 895320 ']' 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:58.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:58.498 [2024-12-05 14:07:40.174918] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:58.498 [2024-12-05 14:07:40.175854] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:35:58.498 [2024-12-05 14:07:40.175888] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:58.498 [2024-12-05 14:07:40.255330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:58.498 [2024-12-05 14:07:40.298028] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:58.498 [2024-12-05 14:07:40.298063] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:58.498 [2024-12-05 14:07:40.298071] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:58.498 [2024-12-05 14:07:40.298078] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:58.498 [2024-12-05 14:07:40.298086] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:58.498 [2024-12-05 14:07:40.299499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:58.498 [2024-12-05 14:07:40.299609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:58.498 [2024-12-05 14:07:40.299708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:58.498 [2024-12-05 14:07:40.299708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:58.498 [2024-12-05 14:07:40.369923] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:58.498 [2024-12-05 14:07:40.370263] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:58.498 [2024-12-05 14:07:40.370746] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
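The `waitforlisten 895320` step above ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...") is a bounded poll: keep checking for the RPC socket while the target PID is still alive, up to `max_retries`. A generic sketch of that pattern (function and parameter names are ours, not SPDK's):

```shell
# Poll until a UNIX socket path appears, bailing out early if the process
# that should create it has died. Mirrors the shape of waitforlisten.
wait_for_socket() {
    local sock=$1 pid=$2 max_retries=${3:-100}
    local i
    for (( i = 0; i < max_retries; i++ )); do
        [[ -S $sock ]] && return 0               # -S: path exists and is a socket
        kill -0 "$pid" 2>/dev/null || return 1   # target died while we waited
        sleep 0.1
    done
    return 1                                     # timed out
}
```

In the log the target comes up within the retry budget, so the run proceeds to `timing_exit start_nvmf_tgt`.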
00:35:58.498 [2024-12-05 14:07:40.370932] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:58.498 [2024-12-05 14:07:40.370986] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:58.498 [2024-12-05 14:07:40.604339] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:35:58.498 14:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:35:58.756 14:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:35:58.756 14:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:58.756 14:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:35:58.756 14:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:59.014 14:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:35:59.014 14:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:35:59.272 14:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:59.531 14:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:35:59.531 14:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:59.814 14:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:35:59.814 14:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:59.814 14:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:35:59.814 14:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:36:00.072 14:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:00.331 14:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:36:00.331 14:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:00.331 14:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:36:00.331 14:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:36:00.590 14:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:00.849 [2024-12-05 14:07:43.248267] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:00.849 14:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:36:01.108 14:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:36:01.108 14:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:01.367 14:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:36:01.367 14:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:36:01.367 14:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:36:01.367 14:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:36:01.367 14:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:36:01.367 14:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:36:03.903 14:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:36:03.903 14:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:36:03.903 14:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:36:03.903 14:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:36:03.903 14:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:36:03.903 14:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:36:03.903 14:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:36:03.903 [global] 00:36:03.903 thread=1 00:36:03.903 invalidate=1 00:36:03.903 rw=write 00:36:03.903 time_based=1 00:36:03.903 runtime=1 00:36:03.903 ioengine=libaio 00:36:03.903 direct=1 00:36:03.903 bs=4096 00:36:03.903 iodepth=1 00:36:03.903 norandommap=0 00:36:03.903 numjobs=1 00:36:03.903 00:36:03.903 verify_dump=1 00:36:03.903 verify_backlog=512 00:36:03.903 verify_state_save=0 00:36:03.903 do_verify=1 00:36:03.903 verify=crc32c-intel 00:36:03.903 [job0] 00:36:03.903 filename=/dev/nvme0n1 00:36:03.903 [job1] 00:36:03.903 filename=/dev/nvme0n2 00:36:03.903 [job2] 00:36:03.903 filename=/dev/nvme0n3 00:36:03.903 [job3] 00:36:03.903 filename=/dev/nvme0n4 00:36:03.903 Could not set queue depth (nvme0n1) 00:36:03.903 Could not set queue depth (nvme0n2) 00:36:03.903 Could not set queue depth (nvme0n3) 00:36:03.903 Could not set queue depth (nvme0n4) 00:36:03.903 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:03.903 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:03.903 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:03.903 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:03.903 fio-3.35 00:36:03.903 Starting 4 threads 00:36:05.279 00:36:05.279 job0: (groupid=0, jobs=1): err= 0: pid=896437: Thu Dec 5 14:07:47 2024 00:36:05.279 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:36:05.279 slat (nsec): min=5043, max=38546, avg=7556.22, stdev=2375.10 00:36:05.279 clat (usec): min=195, max=41932, avg=750.08, stdev=4435.63 00:36:05.279 lat (usec): min=201, 
max=41945, avg=757.63, stdev=4436.76 00:36:05.279 clat percentiles (usec): 00:36:05.279 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 227], 00:36:05.279 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 247], 00:36:05.279 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 302], 95.00th=[ 347], 00:36:05.279 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:36:05.279 | 99.99th=[41681] 00:36:05.279 write: IOPS=1089, BW=4360KiB/s (4464kB/s)(4364KiB/1001msec); 0 zone resets 00:36:05.279 slat (usec): min=7, max=10372, avg=20.08, stdev=313.74 00:36:05.279 clat (usec): min=126, max=366, avg=180.81, stdev=37.87 00:36:05.279 lat (usec): min=137, max=10714, avg=200.89, stdev=320.87 00:36:05.279 clat percentiles (usec): 00:36:05.279 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 151], 00:36:05.279 | 30.00th=[ 155], 40.00th=[ 163], 50.00th=[ 172], 60.00th=[ 182], 00:36:05.279 | 70.00th=[ 190], 80.00th=[ 200], 90.00th=[ 243], 95.00th=[ 265], 00:36:05.279 | 99.00th=[ 306], 99.50th=[ 322], 99.90th=[ 359], 99.95th=[ 367], 00:36:05.279 | 99.99th=[ 367] 00:36:05.279 bw ( KiB/s): min= 4096, max= 4096, per=15.39%, avg=4096.00, stdev= 0.00, samples=1 00:36:05.279 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:05.279 lat (usec) : 250=78.16%, 500=20.95%, 750=0.24% 00:36:05.279 lat (msec) : 2=0.05%, 20=0.05%, 50=0.57% 00:36:05.279 cpu : usr=0.80%, sys=2.10%, ctx=2121, majf=0, minf=1 00:36:05.279 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:05.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:05.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:05.279 issued rwts: total=1024,1091,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:05.279 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:05.279 job1: (groupid=0, jobs=1): err= 0: pid=896438: Thu Dec 5 14:07:47 2024 00:36:05.279 read: IOPS=2159, BW=8639KiB/s 
(8847kB/s)(8648KiB/1001msec) 00:36:05.279 slat (nsec): min=5107, max=26301, avg=8267.75, stdev=1413.14 00:36:05.279 clat (usec): min=164, max=435, avg=219.16, stdev=38.49 00:36:05.279 lat (usec): min=172, max=441, avg=227.43, stdev=38.48 00:36:05.279 clat percentiles (usec): 00:36:05.279 | 1.00th=[ 169], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 194], 00:36:05.279 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 215], 00:36:05.279 | 70.00th=[ 227], 80.00th=[ 247], 90.00th=[ 273], 95.00th=[ 306], 00:36:05.279 | 99.00th=[ 330], 99.50th=[ 338], 99.90th=[ 416], 99.95th=[ 429], 00:36:05.279 | 99.99th=[ 437] 00:36:05.279 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:36:05.279 slat (nsec): min=7184, max=41002, avg=11458.02, stdev=1984.18 00:36:05.279 clat (usec): min=112, max=1588, avg=180.30, stdev=47.79 00:36:05.279 lat (usec): min=119, max=1629, avg=191.76, stdev=48.06 00:36:05.279 clat percentiles (usec): 00:36:05.279 | 1.00th=[ 127], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 151], 00:36:05.279 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 169], 00:36:05.279 | 70.00th=[ 186], 80.00th=[ 237], 90.00th=[ 243], 95.00th=[ 245], 00:36:05.279 | 99.00th=[ 258], 99.50th=[ 269], 99.90th=[ 318], 99.95th=[ 330], 00:36:05.279 | 99.99th=[ 1582] 00:36:05.279 bw ( KiB/s): min=10464, max=10464, per=39.31%, avg=10464.00, stdev= 0.00, samples=1 00:36:05.279 iops : min= 2616, max= 2616, avg=2616.00, stdev= 0.00, samples=1 00:36:05.279 lat (usec) : 250=90.34%, 500=9.64% 00:36:05.279 lat (msec) : 2=0.02% 00:36:05.279 cpu : usr=4.30%, sys=6.90%, ctx=4724, majf=0, minf=1 00:36:05.279 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:05.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:05.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:05.279 issued rwts: total=2162,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:05.279 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:36:05.279 job2: (groupid=0, jobs=1): err= 0: pid=896439: Thu Dec 5 14:07:47 2024 00:36:05.279 read: IOPS=1422, BW=5690KiB/s (5827kB/s)(5696KiB/1001msec) 00:36:05.279 slat (nsec): min=7186, max=24937, avg=8646.50, stdev=1498.55 00:36:05.279 clat (usec): min=209, max=41977, avg=480.58, stdev=3055.44 00:36:05.279 lat (usec): min=217, max=41988, avg=489.22, stdev=3055.74 00:36:05.279 clat percentiles (usec): 00:36:05.279 | 1.00th=[ 217], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 233], 00:36:05.279 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 247], 00:36:05.279 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 293], 00:36:05.279 | 99.00th=[ 498], 99.50th=[40633], 99.90th=[41157], 99.95th=[42206], 00:36:05.279 | 99.99th=[42206] 00:36:05.279 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:36:05.279 slat (nsec): min=10239, max=57201, avg=11864.59, stdev=2277.56 00:36:05.279 clat (usec): min=147, max=470, avg=179.77, stdev=17.94 00:36:05.279 lat (usec): min=161, max=482, avg=191.64, stdev=18.69 00:36:05.279 clat percentiles (usec): 00:36:05.279 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 169], 00:36:05.279 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 180], 00:36:05.279 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 204], 00:36:05.279 | 99.00th=[ 235], 99.50th=[ 269], 99.90th=[ 383], 99.95th=[ 469], 00:36:05.279 | 99.99th=[ 469] 00:36:05.279 bw ( KiB/s): min= 4096, max= 4096, per=15.39%, avg=4096.00, stdev= 0.00, samples=1 00:36:05.279 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:05.279 lat (usec) : 250=84.09%, 500=15.44%, 750=0.20% 00:36:05.279 lat (msec) : 50=0.27% 00:36:05.279 cpu : usr=1.80%, sys=5.50%, ctx=2960, majf=0, minf=2 00:36:05.279 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:05.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:05.279 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:05.279 issued rwts: total=1424,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:05.279 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:05.279 job3: (groupid=0, jobs=1): err= 0: pid=896440: Thu Dec 5 14:07:47 2024 00:36:05.279 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:36:05.279 slat (nsec): min=6574, max=24958, avg=8262.31, stdev=1767.07 00:36:05.279 clat (usec): min=196, max=41245, avg=656.60, stdev=3986.05 00:36:05.279 lat (usec): min=204, max=41260, avg=664.87, stdev=3986.57 00:36:05.279 clat percentiles (usec): 00:36:05.279 | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 212], 20.00th=[ 217], 00:36:05.279 | 30.00th=[ 225], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 247], 00:36:05.279 | 70.00th=[ 258], 80.00th=[ 273], 90.00th=[ 293], 95.00th=[ 416], 00:36:05.279 | 99.00th=[ 4490], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:36:05.279 | 99.99th=[41157] 00:36:05.279 write: IOPS=1473, BW=5894KiB/s (6036kB/s)(5900KiB/1001msec); 0 zone resets 00:36:05.279 slat (usec): min=9, max=39050, avg=37.61, stdev=1016.51 00:36:05.279 clat (usec): min=138, max=364, avg=172.37, stdev=21.66 00:36:05.279 lat (usec): min=149, max=39415, avg=209.98, stdev=1021.74 00:36:05.279 clat percentiles (usec): 00:36:05.279 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:36:05.279 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:36:05.279 | 70.00th=[ 178], 80.00th=[ 186], 90.00th=[ 204], 95.00th=[ 217], 00:36:05.279 | 99.00th=[ 243], 99.50th=[ 255], 99.90th=[ 306], 99.95th=[ 363], 00:36:05.279 | 99.99th=[ 363] 00:36:05.279 bw ( KiB/s): min= 8192, max= 8192, per=30.77%, avg=8192.00, stdev= 0.00, samples=1 00:36:05.279 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:36:05.279 lat (usec) : 250=84.63%, 500=14.53%, 750=0.32% 00:36:05.279 lat (msec) : 2=0.04%, 10=0.08%, 50=0.40% 00:36:05.279 cpu : usr=1.40%, sys=2.40%, ctx=2501, 
majf=0, minf=1 00:36:05.279 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:05.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:05.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:05.280 issued rwts: total=1024,1475,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:05.280 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:05.280 00:36:05.280 Run status group 0 (all jobs): 00:36:05.280 READ: bw=22.0MiB/s (23.1MB/s), 4092KiB/s-8639KiB/s (4190kB/s-8847kB/s), io=22.0MiB (23.1MB), run=1001-1001msec 00:36:05.280 WRITE: bw=26.0MiB/s (27.3MB/s), 4360KiB/s-9.99MiB/s (4464kB/s-10.5MB/s), io=26.0MiB (27.3MB), run=1001-1001msec 00:36:05.280 00:36:05.280 Disk stats (read/write): 00:36:05.280 nvme0n1: ios=560/941, merge=0/0, ticks=1023/164, in_queue=1187, util=84.47% 00:36:05.280 nvme0n2: ios=1924/2048, merge=0/0, ticks=1016/364, in_queue=1380, util=87.32% 00:36:05.280 nvme0n3: ios=1081/1208, merge=0/0, ticks=653/194, in_queue=847, util=93.11% 00:36:05.280 nvme0n4: ios=1042/1024, merge=0/0, ticks=1541/158, in_queue=1699, util=93.78% 00:36:05.280 14:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:36:05.280 [global] 00:36:05.280 thread=1 00:36:05.280 invalidate=1 00:36:05.280 rw=randwrite 00:36:05.280 time_based=1 00:36:05.280 runtime=1 00:36:05.280 ioengine=libaio 00:36:05.280 direct=1 00:36:05.280 bs=4096 00:36:05.280 iodepth=1 00:36:05.280 norandommap=0 00:36:05.280 numjobs=1 00:36:05.280 00:36:05.280 verify_dump=1 00:36:05.280 verify_backlog=512 00:36:05.280 verify_state_save=0 00:36:05.280 do_verify=1 00:36:05.280 verify=crc32c-intel 00:36:05.280 [job0] 00:36:05.280 filename=/dev/nvme0n1 00:36:05.280 [job1] 00:36:05.280 filename=/dev/nvme0n2 00:36:05.280 [job2] 00:36:05.280 filename=/dev/nvme0n3 00:36:05.280 [job3] 
00:36:05.280 filename=/dev/nvme0n4 00:36:05.280 Could not set queue depth (nvme0n1) 00:36:05.280 Could not set queue depth (nvme0n2) 00:36:05.280 Could not set queue depth (nvme0n3) 00:36:05.280 Could not set queue depth (nvme0n4) 00:36:05.537 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:05.537 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:05.537 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:05.537 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:05.537 fio-3.35 00:36:05.537 Starting 4 threads 00:36:06.932 00:36:06.932 job0: (groupid=0, jobs=1): err= 0: pid=896805: Thu Dec 5 14:07:49 2024 00:36:06.932 read: IOPS=21, BW=87.4KiB/s (89.5kB/s)(88.0KiB/1007msec) 00:36:06.932 slat (nsec): min=10635, max=27704, avg=21302.68, stdev=2761.95 00:36:06.932 clat (usec): min=40867, max=42108, avg=41032.79, stdev=261.68 00:36:06.932 lat (usec): min=40889, max=42136, avg=41054.10, stdev=262.20 00:36:06.932 clat percentiles (usec): 00:36:06.932 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:36:06.932 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:06.932 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:06.932 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:06.932 | 99.99th=[42206] 00:36:06.932 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:36:06.932 slat (nsec): min=10861, max=69601, avg=13283.60, stdev=4249.62 00:36:06.932 clat (usec): min=142, max=348, avg=185.98, stdev=23.86 00:36:06.932 lat (usec): min=153, max=360, avg=199.26, stdev=25.22 00:36:06.932 clat percentiles (usec): 00:36:06.932 | 1.00th=[ 145], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:36:06.932 | 30.00th=[ 
174], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 186], 00:36:06.932 | 70.00th=[ 192], 80.00th=[ 200], 90.00th=[ 217], 95.00th=[ 235], 00:36:06.932 | 99.00th=[ 265], 99.50th=[ 293], 99.90th=[ 351], 99.95th=[ 351], 00:36:06.932 | 99.99th=[ 351] 00:36:06.932 bw ( KiB/s): min= 4096, max= 4096, per=51.45%, avg=4096.00, stdev= 0.00, samples=1 00:36:06.932 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:06.932 lat (usec) : 250=94.19%, 500=1.69% 00:36:06.932 lat (msec) : 50=4.12% 00:36:06.932 cpu : usr=0.40%, sys=1.09%, ctx=534, majf=0, minf=2 00:36:06.932 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:06.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.932 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.932 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:06.932 job1: (groupid=0, jobs=1): err= 0: pid=896806: Thu Dec 5 14:07:49 2024 00:36:06.932 read: IOPS=22, BW=89.4KiB/s (91.6kB/s)(92.0KiB/1029msec) 00:36:06.932 slat (nsec): min=5346, max=24503, avg=21891.74, stdev=3672.25 00:36:06.932 clat (usec): min=40827, max=41130, avg=40960.22, stdev=70.38 00:36:06.932 lat (usec): min=40849, max=41153, avg=40982.11, stdev=71.30 00:36:06.932 clat percentiles (usec): 00:36:06.932 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:36:06.932 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:06.932 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:06.932 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:36:06.932 | 99.99th=[41157] 00:36:06.932 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:36:06.932 slat (nsec): min=3333, max=35847, avg=5227.37, stdev=3539.66 00:36:06.932 clat (usec): min=122, max=368, avg=154.45, stdev=27.76 00:36:06.932 lat (usec): 
min=126, max=404, avg=159.68, stdev=30.57 00:36:06.932 clat percentiles (usec): 00:36:06.932 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 00:36:06.932 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 151], 00:36:06.932 | 70.00th=[ 155], 80.00th=[ 161], 90.00th=[ 176], 95.00th=[ 225], 00:36:06.932 | 99.00th=[ 265], 99.50th=[ 285], 99.90th=[ 371], 99.95th=[ 371], 00:36:06.932 | 99.99th=[ 371] 00:36:06.932 bw ( KiB/s): min= 4096, max= 4096, per=51.45%, avg=4096.00, stdev= 0.00, samples=1 00:36:06.932 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:06.932 lat (usec) : 250=93.64%, 500=2.06% 00:36:06.932 lat (msec) : 50=4.30% 00:36:06.932 cpu : usr=0.00%, sys=0.68%, ctx=537, majf=0, minf=1 00:36:06.932 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:06.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.932 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.932 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:06.932 job2: (groupid=0, jobs=1): err= 0: pid=896807: Thu Dec 5 14:07:49 2024 00:36:06.932 read: IOPS=22, BW=90.3KiB/s (92.5kB/s)(92.0KiB/1019msec) 00:36:06.932 slat (nsec): min=8517, max=26222, avg=22895.96, stdev=4417.36 00:36:06.932 clat (usec): min=207, max=41999, avg=39341.24, stdev=8537.74 00:36:06.932 lat (usec): min=217, max=42022, avg=39364.14, stdev=8540.52 00:36:06.932 clat percentiles (usec): 00:36:06.932 | 1.00th=[ 208], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:36:06.932 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:06.932 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:36:06.932 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:06.932 | 99.99th=[42206] 00:36:06.932 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 
zone resets 00:36:06.932 slat (nsec): min=10536, max=29250, avg=12081.05, stdev=1717.82 00:36:06.932 clat (usec): min=146, max=3210, avg=199.73, stdev=137.82 00:36:06.932 lat (usec): min=157, max=3233, avg=211.81, stdev=138.34 00:36:06.932 clat percentiles (usec): 00:36:06.932 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:36:06.932 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 186], 00:36:06.932 | 70.00th=[ 208], 80.00th=[ 231], 90.00th=[ 249], 95.00th=[ 262], 00:36:06.932 | 99.00th=[ 293], 99.50th=[ 297], 99.90th=[ 3195], 99.95th=[ 3195], 00:36:06.932 | 99.99th=[ 3195] 00:36:06.932 bw ( KiB/s): min= 4096, max= 4096, per=51.45%, avg=4096.00, stdev= 0.00, samples=1 00:36:06.932 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:06.932 lat (usec) : 250=87.66%, 500=8.04% 00:36:06.932 lat (msec) : 4=0.19%, 50=4.11% 00:36:06.932 cpu : usr=0.49%, sys=0.88%, ctx=536, majf=0, minf=1 00:36:06.932 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:06.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.932 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.932 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:06.932 job3: (groupid=0, jobs=1): err= 0: pid=896808: Thu Dec 5 14:07:49 2024 00:36:06.932 read: IOPS=23, BW=94.1KiB/s (96.4kB/s)(96.0KiB/1020msec) 00:36:06.932 slat (nsec): min=10553, max=35534, avg=23506.88, stdev=5614.17 00:36:06.932 clat (usec): min=208, max=42077, avg=37728.46, stdev=11561.11 00:36:06.932 lat (usec): min=233, max=42100, avg=37751.97, stdev=11559.79 00:36:06.932 clat percentiles (usec): 00:36:06.932 | 1.00th=[ 210], 5.00th=[ 225], 10.00th=[40633], 20.00th=[40633], 00:36:06.932 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:06.932 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 
95.00th=[42206], 00:36:06.932 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:06.932 | 99.99th=[42206] 00:36:06.932 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:36:06.932 slat (nsec): min=10777, max=41074, avg=13543.55, stdev=2704.40 00:36:06.932 clat (usec): min=150, max=432, avg=198.48, stdev=36.80 00:36:06.932 lat (usec): min=161, max=448, avg=212.02, stdev=36.75 00:36:06.932 clat percentiles (usec): 00:36:06.932 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:36:06.932 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 194], 00:36:06.932 | 70.00th=[ 221], 80.00th=[ 235], 90.00th=[ 247], 95.00th=[ 260], 00:36:06.932 | 99.00th=[ 310], 99.50th=[ 334], 99.90th=[ 433], 99.95th=[ 433], 00:36:06.932 | 99.99th=[ 433] 00:36:06.932 bw ( KiB/s): min= 4096, max= 4096, per=51.45%, avg=4096.00, stdev= 0.00, samples=1 00:36:06.932 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:06.932 lat (usec) : 250=88.25%, 500=7.65% 00:36:06.932 lat (msec) : 50=4.10% 00:36:06.932 cpu : usr=0.29%, sys=1.08%, ctx=537, majf=0, minf=1 00:36:06.932 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:06.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.933 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.933 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:06.933 00:36:06.933 Run status group 0 (all jobs): 00:36:06.933 READ: bw=358KiB/s (366kB/s), 87.4KiB/s-94.1KiB/s (89.5kB/s-96.4kB/s), io=368KiB (377kB), run=1007-1029msec 00:36:06.933 WRITE: bw=7961KiB/s (8152kB/s), 1990KiB/s-2034KiB/s (2038kB/s-2083kB/s), io=8192KiB (8389kB), run=1007-1029msec 00:36:06.933 00:36:06.933 Disk stats (read/write): 00:36:06.933 nvme0n1: ios=68/512, merge=0/0, ticks=754/94, in_queue=848, util=86.87% 00:36:06.933 nvme0n2: 
ios=62/512, merge=0/0, ticks=1239/77, in_queue=1316, util=98.27% 00:36:06.933 nvme0n3: ios=54/512, merge=0/0, ticks=1623/99, in_queue=1722, util=95.94% 00:36:06.933 nvme0n4: ios=77/512, merge=0/0, ticks=879/95, in_queue=974, util=98.11% 00:36:06.933 14:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:36:06.933 [global] 00:36:06.933 thread=1 00:36:06.933 invalidate=1 00:36:06.933 rw=write 00:36:06.933 time_based=1 00:36:06.933 runtime=1 00:36:06.933 ioengine=libaio 00:36:06.933 direct=1 00:36:06.933 bs=4096 00:36:06.933 iodepth=128 00:36:06.933 norandommap=0 00:36:06.933 numjobs=1 00:36:06.933 00:36:06.933 verify_dump=1 00:36:06.933 verify_backlog=512 00:36:06.933 verify_state_save=0 00:36:06.933 do_verify=1 00:36:06.933 verify=crc32c-intel 00:36:06.933 [job0] 00:36:06.933 filename=/dev/nvme0n1 00:36:06.933 [job1] 00:36:06.933 filename=/dev/nvme0n2 00:36:06.933 [job2] 00:36:06.933 filename=/dev/nvme0n3 00:36:06.933 [job3] 00:36:06.933 filename=/dev/nvme0n4 00:36:06.933 Could not set queue depth (nvme0n1) 00:36:06.933 Could not set queue depth (nvme0n2) 00:36:06.933 Could not set queue depth (nvme0n3) 00:36:06.933 Could not set queue depth (nvme0n4) 00:36:06.933 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:06.933 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:06.933 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:06.933 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:06.933 fio-3.35 00:36:06.933 Starting 4 threads 00:36:08.305 00:36:08.305 job0: (groupid=0, jobs=1): err= 0: pid=897182: Thu Dec 5 14:07:50 2024 00:36:08.305 read: IOPS=5104, BW=19.9MiB/s 
(20.9MB/s)(20.0MiB/1003msec) 00:36:08.305 slat (nsec): min=1448, max=8871.8k, avg=96594.34, stdev=541089.94 00:36:08.305 clat (usec): min=6128, max=41441, avg=12111.51, stdev=3907.53 00:36:08.305 lat (usec): min=6131, max=41450, avg=12208.11, stdev=3949.96 00:36:08.305 clat percentiles (usec): 00:36:08.305 | 1.00th=[ 7898], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10159], 00:36:08.305 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 00:36:08.305 | 70.00th=[11469], 80.00th=[12387], 90.00th=[17957], 95.00th=[21365], 00:36:08.305 | 99.00th=[24249], 99.50th=[29230], 99.90th=[41681], 99.95th=[41681], 00:36:08.305 | 99.99th=[41681] 00:36:08.305 write: IOPS=5112, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:36:08.305 slat (usec): min=2, max=23156, avg=94.82, stdev=657.14 00:36:08.305 clat (usec): min=561, max=59135, avg=12580.97, stdev=7559.76 00:36:08.305 lat (usec): min=5786, max=59164, avg=12675.79, stdev=7602.98 00:36:08.305 clat percentiles (usec): 00:36:08.305 | 1.00th=[ 7701], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[ 9896], 00:36:08.305 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:36:08.305 | 70.00th=[10945], 80.00th=[11469], 90.00th=[16581], 95.00th=[35914], 00:36:08.305 | 99.00th=[45876], 99.50th=[45876], 99.90th=[52691], 99.95th=[52691], 00:36:08.305 | 99.99th=[58983] 00:36:08.305 bw ( KiB/s): min=20480, max=20480, per=28.85%, avg=20480.00, stdev= 0.00, samples=2 00:36:08.305 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:36:08.305 lat (usec) : 750=0.01% 00:36:08.305 lat (msec) : 10=21.20%, 20=71.54%, 50=7.16%, 100=0.09% 00:36:08.305 cpu : usr=3.29%, sys=3.89%, ctx=502, majf=0, minf=1 00:36:08.305 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:36:08.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.305 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:08.305 issued rwts: 
total=5120,5128,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.305 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:08.305 job1: (groupid=0, jobs=1): err= 0: pid=897183: Thu Dec 5 14:07:50 2024 00:36:08.305 read: IOPS=3821, BW=14.9MiB/s (15.7MB/s)(15.0MiB/1004msec) 00:36:08.305 slat (nsec): min=1042, max=24375k, avg=136485.52, stdev=1021147.19 00:36:08.305 clat (usec): min=708, max=91092, avg=18375.56, stdev=12196.45 00:36:08.305 lat (usec): min=5866, max=93730, avg=18512.04, stdev=12279.57 00:36:08.305 clat percentiles (usec): 00:36:08.305 | 1.00th=[ 6390], 5.00th=[ 8291], 10.00th=[10552], 20.00th=[10945], 00:36:08.305 | 30.00th=[11863], 40.00th=[12256], 50.00th=[13829], 60.00th=[15795], 00:36:08.305 | 70.00th=[18482], 80.00th=[23200], 90.00th=[34341], 95.00th=[44303], 00:36:08.305 | 99.00th=[67634], 99.50th=[71828], 99.90th=[83362], 99.95th=[90702], 00:36:08.305 | 99.99th=[90702] 00:36:08.305 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:36:08.305 slat (nsec): min=1868, max=14225k, avg=111876.77, stdev=763298.36 00:36:08.305 clat (usec): min=4739, max=72495, avg=13691.84, stdev=8708.58 00:36:08.305 lat (usec): min=4743, max=72502, avg=13803.71, stdev=8771.64 00:36:08.305 clat percentiles (usec): 00:36:08.305 | 1.00th=[ 4948], 5.00th=[ 7046], 10.00th=[ 8586], 20.00th=[ 9765], 00:36:08.305 | 30.00th=[10552], 40.00th=[11207], 50.00th=[11731], 60.00th=[12256], 00:36:08.305 | 70.00th=[14353], 80.00th=[15795], 90.00th=[16909], 95.00th=[23462], 00:36:08.305 | 99.00th=[64750], 99.50th=[72877], 99.90th=[72877], 99.95th=[72877], 00:36:08.305 | 99.99th=[72877] 00:36:08.305 bw ( KiB/s): min=16384, max=16384, per=23.08%, avg=16384.00, stdev= 0.00, samples=2 00:36:08.305 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:36:08.305 lat (usec) : 750=0.01% 00:36:08.305 lat (msec) : 10=16.40%, 20=67.67%, 50=13.07%, 100=2.85% 00:36:08.305 cpu : usr=1.89%, sys=4.49%, ctx=278, majf=0, minf=1 00:36:08.305 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:36:08.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.305 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:08.305 issued rwts: total=3837,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.305 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:08.305 job2: (groupid=0, jobs=1): err= 0: pid=897184: Thu Dec 5 14:07:50 2024 00:36:08.305 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:36:08.305 slat (nsec): min=1274, max=10591k, avg=113371.02, stdev=706553.99 00:36:08.305 clat (usec): min=6953, max=44946, avg=15081.56, stdev=5872.02 00:36:08.305 lat (usec): min=6961, max=46228, avg=15194.93, stdev=5919.79 00:36:08.305 clat percentiles (usec): 00:36:08.305 | 1.00th=[ 7963], 5.00th=[ 9765], 10.00th=[10552], 20.00th=[11207], 00:36:08.305 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12256], 60.00th=[13960], 00:36:08.305 | 70.00th=[16909], 80.00th=[18482], 90.00th=[23987], 95.00th=[27919], 00:36:08.305 | 99.00th=[33424], 99.50th=[39584], 99.90th=[44827], 99.95th=[44827], 00:36:08.305 | 99.99th=[44827] 00:36:08.305 write: IOPS=4312, BW=16.8MiB/s (17.7MB/s)(17.0MiB/1007msec); 0 zone resets 00:36:08.305 slat (nsec): min=1927, max=13157k, avg=117096.97, stdev=712925.44 00:36:08.305 clat (usec): min=5700, max=43853, avg=15018.07, stdev=6685.85 00:36:08.305 lat (usec): min=6191, max=43865, avg=15135.17, stdev=6739.19 00:36:08.305 clat percentiles (usec): 00:36:08.305 | 1.00th=[ 8291], 5.00th=[10290], 10.00th=[10683], 20.00th=[10945], 00:36:08.305 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11600], 60.00th=[12256], 00:36:08.305 | 70.00th=[14746], 80.00th=[19530], 90.00th=[24773], 95.00th=[28443], 00:36:08.305 | 99.00th=[39060], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:36:08.306 | 99.99th=[43779] 00:36:08.306 bw ( KiB/s): min=13928, max=19800, per=23.76%, avg=16864.00, stdev=4152.13, samples=2 00:36:08.306 iops : 
min= 3482, max= 4950, avg=4216.00, stdev=1038.03, samples=2 00:36:08.306 lat (msec) : 10=5.18%, 20=77.06%, 50=17.76% 00:36:08.306 cpu : usr=3.28%, sys=5.96%, ctx=382, majf=0, minf=1 00:36:08.306 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:36:08.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:08.306 issued rwts: total=4096,4343,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.306 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:08.306 job3: (groupid=0, jobs=1): err= 0: pid=897185: Thu Dec 5 14:07:50 2024 00:36:08.306 read: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:36:08.306 slat (nsec): min=1373, max=12924k, avg=105043.80, stdev=805638.30 00:36:08.306 clat (usec): min=3148, max=60028, avg=13451.50, stdev=5975.05 00:36:08.306 lat (usec): min=3153, max=60037, avg=13556.55, stdev=6048.93 00:36:08.306 clat percentiles (usec): 00:36:08.306 | 1.00th=[ 5276], 5.00th=[ 9372], 10.00th=[ 9372], 20.00th=[10159], 00:36:08.306 | 30.00th=[10683], 40.00th=[11338], 50.00th=[12125], 60.00th=[12780], 00:36:08.306 | 70.00th=[13960], 80.00th=[15664], 90.00th=[17695], 95.00th=[21365], 00:36:08.306 | 99.00th=[44827], 99.50th=[54789], 99.90th=[60031], 99.95th=[60031], 00:36:08.306 | 99.99th=[60031] 00:36:08.306 write: IOPS=4314, BW=16.9MiB/s (17.7MB/s)(17.0MiB/1010msec); 0 zone resets 00:36:08.306 slat (usec): min=2, max=11771, avg=113.76, stdev=697.41 00:36:08.306 clat (usec): min=1191, max=60028, avg=16761.76, stdev=11850.86 00:36:08.306 lat (usec): min=1200, max=60038, avg=16875.51, stdev=11933.67 00:36:08.306 clat percentiles (usec): 00:36:08.306 | 1.00th=[ 4228], 5.00th=[ 7046], 10.00th=[ 8717], 20.00th=[10159], 00:36:08.306 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11863], 60.00th=[13173], 00:36:08.306 | 70.00th=[15008], 80.00th=[21365], 90.00th=[36963], 95.00th=[49021], 00:36:08.306 | 99.00th=[55313], 
99.50th=[56361], 99.90th=[56886], 99.95th=[56886], 00:36:08.306 | 99.99th=[60031] 00:36:08.306 bw ( KiB/s): min=15416, max=18432, per=23.84%, avg=16924.00, stdev=2132.63, samples=2 00:36:08.306 iops : min= 3854, max= 4608, avg=4231.00, stdev=533.16, samples=2 00:36:08.306 lat (msec) : 2=0.05%, 4=0.71%, 10=16.10%, 20=67.96%, 50=12.53% 00:36:08.306 lat (msec) : 100=2.66% 00:36:08.306 cpu : usr=2.97%, sys=6.05%, ctx=341, majf=0, minf=1 00:36:08.306 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:36:08.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:08.306 issued rwts: total=4096,4358,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.306 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:08.306 00:36:08.306 Run status group 0 (all jobs): 00:36:08.306 READ: bw=66.3MiB/s (69.5MB/s), 14.9MiB/s-19.9MiB/s (15.7MB/s-20.9MB/s), io=67.0MiB (70.2MB), run=1003-1010msec 00:36:08.306 WRITE: bw=69.3MiB/s (72.7MB/s), 15.9MiB/s-20.0MiB/s (16.7MB/s-20.9MB/s), io=70.0MiB (73.4MB), run=1003-1010msec 00:36:08.306 00:36:08.306 Disk stats (read/write): 00:36:08.306 nvme0n1: ios=4515/4608, merge=0/0, ticks=13537/14908, in_queue=28445, util=86.07% 00:36:08.306 nvme0n2: ios=3107/3347, merge=0/0, ticks=27541/22796, in_queue=50337, util=98.48% 00:36:08.306 nvme0n3: ios=3603/3717, merge=0/0, ticks=20067/17437, in_queue=37504, util=98.34% 00:36:08.306 nvme0n4: ios=3614/3871, merge=0/0, ticks=45438/55980, in_queue=101418, util=100.00% 00:36:08.306 14:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:36:08.306 [global] 00:36:08.306 thread=1 00:36:08.306 invalidate=1 00:36:08.306 rw=randwrite 00:36:08.306 time_based=1 00:36:08.306 runtime=1 00:36:08.306 ioengine=libaio 00:36:08.306 direct=1 
00:36:08.306 bs=4096 00:36:08.306 iodepth=128 00:36:08.306 norandommap=0 00:36:08.306 numjobs=1 00:36:08.306 00:36:08.306 verify_dump=1 00:36:08.306 verify_backlog=512 00:36:08.306 verify_state_save=0 00:36:08.306 do_verify=1 00:36:08.306 verify=crc32c-intel 00:36:08.306 [job0] 00:36:08.306 filename=/dev/nvme0n1 00:36:08.306 [job1] 00:36:08.306 filename=/dev/nvme0n2 00:36:08.306 [job2] 00:36:08.306 filename=/dev/nvme0n3 00:36:08.306 [job3] 00:36:08.306 filename=/dev/nvme0n4 00:36:08.306 Could not set queue depth (nvme0n1) 00:36:08.306 Could not set queue depth (nvme0n2) 00:36:08.306 Could not set queue depth (nvme0n3) 00:36:08.306 Could not set queue depth (nvme0n4) 00:36:08.564 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:08.564 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:08.564 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:08.564 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:08.564 fio-3.35 00:36:08.564 Starting 4 threads 00:36:09.941 00:36:09.941 job0: (groupid=0, jobs=1): err= 0: pid=897553: Thu Dec 5 14:07:52 2024 00:36:09.941 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:36:09.941 slat (nsec): min=994, max=12449k, avg=109632.14, stdev=666571.90 00:36:09.941 clat (usec): min=6092, max=54799, avg=14534.74, stdev=5026.39 00:36:09.941 lat (usec): min=6096, max=54801, avg=14644.38, stdev=5048.03 00:36:09.941 clat percentiles (usec): 00:36:09.941 | 1.00th=[ 7439], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[10028], 00:36:09.941 | 30.00th=[10290], 40.00th=[10683], 50.00th=[13566], 60.00th=[15139], 00:36:09.941 | 70.00th=[17695], 80.00th=[19006], 90.00th=[21103], 95.00th=[22676], 00:36:09.941 | 99.00th=[25560], 99.50th=[27657], 99.90th=[54789], 99.95th=[54789], 00:36:09.941 | 
99.99th=[54789] 00:36:09.941 write: IOPS=4532, BW=17.7MiB/s (18.6MB/s)(17.7MiB/1001msec); 0 zone resets 00:36:09.941 slat (nsec): min=1736, max=8756.3k, avg=115775.01, stdev=619174.89 00:36:09.941 clat (usec): min=234, max=42069, avg=14705.45, stdev=6792.88 00:36:09.941 lat (usec): min=2545, max=42081, avg=14821.23, stdev=6822.04 00:36:09.941 clat percentiles (usec): 00:36:09.941 | 1.00th=[ 5538], 5.00th=[ 7570], 10.00th=[ 9634], 20.00th=[10028], 00:36:09.941 | 30.00th=[10290], 40.00th=[10683], 50.00th=[12256], 60.00th=[15008], 00:36:09.941 | 70.00th=[16319], 80.00th=[17957], 90.00th=[23462], 95.00th=[29754], 00:36:09.941 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:09.941 | 99.99th=[42206] 00:36:09.941 bw ( KiB/s): min=17920, max=17920, per=27.85%, avg=17920.00, stdev= 0.00, samples=1 00:36:09.941 iops : min= 4480, max= 4480, avg=4480.00, stdev= 0.00, samples=1 00:36:09.941 lat (usec) : 250=0.01% 00:36:09.941 lat (msec) : 4=0.43%, 10=18.83%, 20=65.75%, 50=14.87%, 100=0.10% 00:36:09.941 cpu : usr=2.70%, sys=4.80%, ctx=418, majf=0, minf=2 00:36:09.941 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:36:09.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:09.941 issued rwts: total=4096,4537,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:09.941 job1: (groupid=0, jobs=1): err= 0: pid=897554: Thu Dec 5 14:07:52 2024 00:36:09.941 read: IOPS=4404, BW=17.2MiB/s (18.0MB/s)(18.0MiB/1046msec) 00:36:09.941 slat (nsec): min=1614, max=9074.4k, avg=107950.11, stdev=634284.06 00:36:09.941 clat (usec): min=7587, max=63674, avg=15028.70, stdev=8268.88 00:36:09.941 lat (usec): min=7594, max=63680, avg=15136.65, stdev=8309.19 00:36:09.941 clat percentiles (usec): 00:36:09.941 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10421], 
00:36:09.941 | 30.00th=[11076], 40.00th=[11863], 50.00th=[12649], 60.00th=[13698], 00:36:09.941 | 70.00th=[14222], 80.00th=[16188], 90.00th=[23987], 95.00th=[29492], 00:36:09.941 | 99.00th=[60031], 99.50th=[62129], 99.90th=[63701], 99.95th=[63701], 00:36:09.941 | 99.99th=[63701] 00:36:09.941 write: IOPS=4405, BW=17.2MiB/s (18.0MB/s)(18.0MiB/1046msec); 0 zone resets 00:36:09.941 slat (usec): min=2, max=8221, avg=102.55, stdev=574.41 00:36:09.941 clat (usec): min=6915, max=35082, avg=13603.31, stdev=4387.16 00:36:09.941 lat (usec): min=6919, max=35098, avg=13705.86, stdev=4444.78 00:36:09.941 clat percentiles (usec): 00:36:09.941 | 1.00th=[ 9241], 5.00th=[ 9765], 10.00th=[ 9896], 20.00th=[10028], 00:36:09.941 | 30.00th=[10814], 40.00th=[11600], 50.00th=[12125], 60.00th=[13435], 00:36:09.941 | 70.00th=[14222], 80.00th=[15926], 90.00th=[18744], 95.00th=[25822], 00:36:09.941 | 99.00th=[26608], 99.50th=[27395], 99.90th=[34341], 99.95th=[34341], 00:36:09.941 | 99.99th=[34866] 00:36:09.941 bw ( KiB/s): min=16384, max=20480, per=28.65%, avg=18432.00, stdev=2896.31, samples=2 00:36:09.941 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:36:09.941 lat (msec) : 10=13.72%, 20=75.58%, 50=9.79%, 100=0.91% 00:36:09.941 cpu : usr=4.11%, sys=5.93%, ctx=398, majf=0, minf=1 00:36:09.941 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:36:09.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:09.941 issued rwts: total=4607,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:09.941 job2: (groupid=0, jobs=1): err= 0: pid=897555: Thu Dec 5 14:07:52 2024 00:36:09.941 read: IOPS=3923, BW=15.3MiB/s (16.1MB/s)(15.4MiB/1005msec) 00:36:09.941 slat (nsec): min=1607, max=13293k, avg=112766.03, stdev=792780.84 00:36:09.941 clat (usec): min=702, max=34176, avg=14359.63, 
stdev=4282.24 00:36:09.941 lat (usec): min=4406, max=34187, avg=14472.39, stdev=4350.27 00:36:09.941 clat percentiles (usec): 00:36:09.941 | 1.00th=[ 6325], 5.00th=[ 9372], 10.00th=[10421], 20.00th=[10814], 00:36:09.941 | 30.00th=[11863], 40.00th=[12649], 50.00th=[13829], 60.00th=[14353], 00:36:09.941 | 70.00th=[15926], 80.00th=[17433], 90.00th=[19006], 95.00th=[22676], 00:36:09.941 | 99.00th=[30540], 99.50th=[31327], 99.90th=[34341], 99.95th=[34341], 00:36:09.941 | 99.99th=[34341] 00:36:09.941 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:36:09.941 slat (usec): min=2, max=11661, avg=126.01, stdev=762.43 00:36:09.941 clat (usec): min=3247, max=75987, avg=16301.30, stdev=10267.86 00:36:09.941 lat (usec): min=3256, max=75995, avg=16427.31, stdev=10342.43 00:36:09.941 clat percentiles (usec): 00:36:09.941 | 1.00th=[ 5800], 5.00th=[ 6325], 10.00th=[ 8160], 20.00th=[ 8717], 00:36:09.941 | 30.00th=[10290], 40.00th=[11600], 50.00th=[13698], 60.00th=[15401], 00:36:09.941 | 70.00th=[17695], 80.00th=[21365], 90.00th=[30540], 95.00th=[31851], 00:36:09.941 | 99.00th=[72877], 99.50th=[74974], 99.90th=[74974], 99.95th=[74974], 00:36:09.941 | 99.99th=[76022] 00:36:09.941 bw ( KiB/s): min=16384, max=16384, per=25.46%, avg=16384.00, stdev= 0.00, samples=2 00:36:09.941 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:36:09.941 lat (usec) : 750=0.01% 00:36:09.941 lat (msec) : 4=0.15%, 10=17.64%, 20=67.28%, 50=14.12%, 100=0.80% 00:36:09.941 cpu : usr=3.78%, sys=5.08%, ctx=326, majf=0, minf=1 00:36:09.941 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:36:09.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:09.941 issued rwts: total=3943,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:09.941 job3: (groupid=0, jobs=1): err= 
0: pid=897556: Thu Dec 5 14:07:52 2024 00:36:09.941 read: IOPS=3203, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1002msec) 00:36:09.941 slat (nsec): min=1546, max=22349k, avg=147438.32, stdev=1071461.22 00:36:09.941 clat (usec): min=613, max=69944, avg=20106.62, stdev=14634.25 00:36:09.941 lat (usec): min=4659, max=69963, avg=20254.06, stdev=14738.07 00:36:09.941 clat percentiles (usec): 00:36:09.941 | 1.00th=[ 4752], 5.00th=[ 7504], 10.00th=[ 8455], 20.00th=[ 9634], 00:36:09.941 | 30.00th=[11338], 40.00th=[12649], 50.00th=[14353], 60.00th=[16319], 00:36:09.941 | 70.00th=[18220], 80.00th=[33162], 90.00th=[44827], 95.00th=[53740], 00:36:09.941 | 99.00th=[63701], 99.50th=[65274], 99.90th=[69731], 99.95th=[69731], 00:36:09.941 | 99.99th=[69731] 00:36:09.941 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:36:09.941 slat (usec): min=2, max=22736, avg=137.71, stdev=1010.57 00:36:09.941 clat (usec): min=1872, max=67600, avg=17354.13, stdev=11710.21 00:36:09.941 lat (usec): min=1877, max=67633, avg=17491.84, stdev=11836.34 00:36:09.941 clat percentiles (usec): 00:36:09.941 | 1.00th=[ 3294], 5.00th=[ 8160], 10.00th=[ 9110], 20.00th=[ 9765], 00:36:09.941 | 30.00th=[10683], 40.00th=[11469], 50.00th=[12125], 60.00th=[13566], 00:36:09.941 | 70.00th=[17957], 80.00th=[21627], 90.00th=[38011], 95.00th=[43254], 00:36:09.941 | 99.00th=[54264], 99.50th=[54264], 99.90th=[62653], 99.95th=[64750], 00:36:09.941 | 99.99th=[67634] 00:36:09.941 bw ( KiB/s): min=12288, max=16384, per=22.28%, avg=14336.00, stdev=2896.31, samples=2 00:36:09.941 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:36:09.941 lat (usec) : 750=0.01% 00:36:09.942 lat (msec) : 2=0.12%, 4=0.44%, 10=21.53%, 20=52.83%, 50=19.16% 00:36:09.942 lat (msec) : 100=5.90% 00:36:09.942 cpu : usr=2.70%, sys=4.10%, ctx=292, majf=0, minf=1 00:36:09.942 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:36:09.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:36:09.942 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:09.942 issued rwts: total=3210,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.942 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:09.942 00:36:09.942 Run status group 0 (all jobs): 00:36:09.942 READ: bw=59.2MiB/s (62.1MB/s), 12.5MiB/s-17.2MiB/s (13.1MB/s-18.0MB/s), io=61.9MiB (64.9MB), run=1001-1046msec 00:36:09.942 WRITE: bw=62.8MiB/s (65.9MB/s), 14.0MiB/s-17.7MiB/s (14.7MB/s-18.6MB/s), io=65.7MiB (68.9MB), run=1001-1046msec 00:36:09.942 00:36:09.942 Disk stats (read/write): 00:36:09.942 nvme0n1: ios=3621/3928, merge=0/0, ticks=16814/15816, in_queue=32630, util=98.50% 00:36:09.942 nvme0n2: ios=4132/4372, merge=0/0, ticks=16786/17085, in_queue=33871, util=98.68% 00:36:09.942 nvme0n3: ios=3330/3584, merge=0/0, ticks=39025/49197, in_queue=88222, util=97.82% 00:36:09.942 nvme0n4: ios=2467/2560, merge=0/0, ticks=21397/22887, in_queue=44284, util=98.01% 00:36:09.942 14:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:36:09.942 14:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=897790 00:36:09.942 14:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:36:09.942 14:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:36:09.942 [global] 00:36:09.942 thread=1 00:36:09.942 invalidate=1 00:36:09.942 rw=read 00:36:09.942 time_based=1 00:36:09.942 runtime=10 00:36:09.942 ioengine=libaio 00:36:09.942 direct=1 00:36:09.942 bs=4096 00:36:09.942 iodepth=1 00:36:09.942 norandommap=1 00:36:09.942 numjobs=1 00:36:09.942 00:36:09.942 [job0] 00:36:09.942 filename=/dev/nvme0n1 00:36:09.942 [job1] 00:36:09.942 filename=/dev/nvme0n2 00:36:09.942 [job2] 00:36:09.942 filename=/dev/nvme0n3 
00:36:09.942 [job3] 00:36:09.942 filename=/dev/nvme0n4 00:36:09.942 Could not set queue depth (nvme0n1) 00:36:09.942 Could not set queue depth (nvme0n2) 00:36:09.942 Could not set queue depth (nvme0n3) 00:36:09.942 Could not set queue depth (nvme0n4) 00:36:10.200 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:10.200 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:10.200 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:10.200 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:10.200 fio-3.35 00:36:10.200 Starting 4 threads 00:36:13.485 14:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:36:13.485 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=43593728, buflen=4096 00:36:13.485 fio: pid=897953, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:13.486 14:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:36:13.486 14:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:13.486 14:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:36:13.486 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=23552000, buflen=4096 00:36:13.486 fio: pid=897949, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:13.486 fio: io_u error on file 
/dev/nvme0n1: Operation not supported: read offset=315392, buflen=4096 00:36:13.486 fio: pid=897926, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:13.486 14:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:13.486 14:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:36:13.745 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=45985792, buflen=4096 00:36:13.745 fio: pid=897931, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:13.745 14:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:13.745 14:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:36:13.745 00:36:13.745 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=897926: Thu Dec 5 14:07:56 2024 00:36:13.745 read: IOPS=25, BW=98.8KiB/s (101kB/s)(308KiB/3117msec) 00:36:13.745 slat (usec): min=11, max=9869, avg=151.62, stdev=1114.63 00:36:13.745 clat (usec): min=308, max=42056, avg=40027.91, stdev=6516.47 00:36:13.745 lat (usec): min=333, max=51901, avg=40181.25, stdev=6650.62 00:36:13.745 clat percentiles (usec): 00:36:13.745 | 1.00th=[ 310], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:36:13.745 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:13.745 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:36:13.745 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:13.745 | 99.99th=[42206] 00:36:13.745 bw ( 
KiB/s): min= 96, max= 104, per=0.30%, avg=99.50, stdev= 3.99, samples=6 00:36:13.745 iops : min= 24, max= 26, avg=24.83, stdev= 0.98, samples=6 00:36:13.745 lat (usec) : 500=1.28%, 750=1.28% 00:36:13.745 lat (msec) : 50=96.15% 00:36:13.745 cpu : usr=0.13%, sys=0.00%, ctx=82, majf=0, minf=2 00:36:13.745 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:13.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.745 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.745 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.745 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:13.745 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=897931: Thu Dec 5 14:07:56 2024 00:36:13.745 read: IOPS=3355, BW=13.1MiB/s (13.7MB/s)(43.9MiB/3346msec) 00:36:13.745 slat (usec): min=6, max=26175, avg=14.35, stdev=344.00 00:36:13.745 clat (usec): min=157, max=41309, avg=280.76, stdev=1088.39 00:36:13.745 lat (usec): min=165, max=41316, avg=295.11, stdev=1142.03 00:36:13.745 clat percentiles (usec): 00:36:13.745 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 231], 00:36:13.745 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:36:13.745 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 285], 95.00th=[ 371], 00:36:13.745 | 99.00th=[ 469], 99.50th=[ 498], 99.90th=[ 644], 99.95th=[40633], 00:36:13.745 | 99.99th=[41157] 00:36:13.745 bw ( KiB/s): min=10256, max=14462, per=39.66%, avg=13133.00, stdev=1544.39, samples=6 00:36:13.745 iops : min= 2564, max= 3615, avg=3283.17, stdev=386.01, samples=6 00:36:13.745 lat (usec) : 250=59.98%, 500=39.62%, 750=0.29% 00:36:13.745 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 50=0.07% 00:36:13.745 cpu : usr=0.84%, sys=2.96%, ctx=11234, majf=0, minf=2 00:36:13.745 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:13.745 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.745 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.745 issued rwts: total=11228,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.745 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:13.745 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=897949: Thu Dec 5 14:07:56 2024 00:36:13.745 read: IOPS=1973, BW=7893KiB/s (8082kB/s)(22.5MiB/2914msec) 00:36:13.745 slat (nsec): min=5144, max=39967, avg=8872.95, stdev=1428.67 00:36:13.745 clat (usec): min=189, max=44211, avg=492.26, stdev=2901.74 00:36:13.745 lat (usec): min=198, max=44220, avg=501.13, stdev=2901.72 00:36:13.745 clat percentiles (usec): 00:36:13.745 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 217], 20.00th=[ 231], 00:36:13.745 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 255], 60.00th=[ 269], 00:36:13.745 | 70.00th=[ 297], 80.00th=[ 363], 90.00th=[ 396], 95.00th=[ 494], 00:36:13.745 | 99.00th=[ 515], 99.50th=[40633], 99.90th=[41681], 99.95th=[42206], 00:36:13.745 | 99.99th=[44303] 00:36:13.745 bw ( KiB/s): min= 4672, max=14704, per=27.74%, avg=9184.00, stdev=3741.55, samples=5 00:36:13.745 iops : min= 1168, max= 3676, avg=2296.00, stdev=935.39, samples=5 00:36:13.745 lat (usec) : 250=46.64%, 500=49.24%, 750=3.60% 00:36:13.745 lat (msec) : 50=0.50% 00:36:13.745 cpu : usr=1.48%, sys=2.95%, ctx=5753, majf=0, minf=1 00:36:13.746 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:13.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.746 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.746 issued rwts: total=5751,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.746 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:13.746 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=897953: Thu Dec 5 14:07:56 
2024 00:36:13.746 read: IOPS=3980, BW=15.5MiB/s (16.3MB/s)(41.6MiB/2674msec) 00:36:13.746 slat (nsec): min=6336, max=40374, avg=7497.69, stdev=1104.85 00:36:13.746 clat (usec): min=176, max=525, avg=240.55, stdev=49.31 00:36:13.746 lat (usec): min=184, max=535, avg=248.04, stdev=49.56 00:36:13.746 clat percentiles (usec): 00:36:13.746 | 1.00th=[ 192], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 212], 00:36:13.746 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 231], 60.00th=[ 239], 00:36:13.746 | 70.00th=[ 247], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 302], 00:36:13.746 | 99.00th=[ 502], 99.50th=[ 506], 99.90th=[ 510], 99.95th=[ 515], 00:36:13.746 | 99.99th=[ 519] 00:36:13.746 bw ( KiB/s): min=13080, max=17488, per=48.17%, avg=15948.80, stdev=1675.48, samples=5 00:36:13.746 iops : min= 3270, max= 4372, avg=3987.20, stdev=418.87, samples=5 00:36:13.746 lat (usec) : 250=73.49%, 500=25.22%, 750=1.29% 00:36:13.746 cpu : usr=1.31%, sys=4.15%, ctx=10644, majf=0, minf=2 00:36:13.746 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:13.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.746 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.746 issued rwts: total=10644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.746 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:13.746 00:36:13.746 Run status group 0 (all jobs): 00:36:13.746 READ: bw=32.3MiB/s (33.9MB/s), 98.8KiB/s-15.5MiB/s (101kB/s-16.3MB/s), io=108MiB (113MB), run=2674-3346msec 00:36:13.746 00:36:13.746 Disk stats (read/write): 00:36:13.746 nvme0n1: ios=101/0, merge=0/0, ticks=3419/0, in_queue=3419, util=98.92% 00:36:13.746 nvme0n2: ios=11218/0, merge=0/0, ticks=3085/0, in_queue=3085, util=93.54% 00:36:13.746 nvme0n3: ios=5778/0, merge=0/0, ticks=3198/0, in_queue=3198, util=99.01% 00:36:13.746 nvme0n4: ios=10213/0, merge=0/0, ticks=2404/0, in_queue=2404, util=96.38% 00:36:14.005 14:07:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:14.005 14:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:36:14.264 14:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:14.264 14:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:36:14.264 14:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:14.264 14:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:36:14.523 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:14.523 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:36:14.783 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:36:14.783 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 897790 00:36:14.783 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:36:14.783 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:14.783 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:14.783 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:14.783 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:36:14.783 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:36:14.783 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:14.783 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:36:14.783 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:14.783 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:36:14.783 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:36:14.783 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:36:14.783 nvmf hotplug test: fio failed as expected 00:36:14.783 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:15.041 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:36:15.041 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:36:15.041 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:36:15.041 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:36:15.041 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:36:15.041 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:15.041 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:36:15.041 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:15.041 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:36:15.041 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:15.041 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:15.041 rmmod nvme_tcp 00:36:15.041 rmmod nvme_fabrics 00:36:15.041 rmmod nvme_keyring 00:36:15.301 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:15.301 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:36:15.301 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:36:15.301 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 895320 ']' 00:36:15.301 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 895320 00:36:15.301 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 895320 ']' 00:36:15.301 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 895320 00:36:15.301 14:07:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:36:15.301 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:15.301 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 895320 00:36:15.301 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:15.301 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:15.301 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 895320' 00:36:15.301 killing process with pid 895320 00:36:15.301 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 895320 00:36:15.301 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 895320 00:36:15.301 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:15.301 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:15.301 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:15.301 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:36:15.301 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:36:15.301 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:15.301 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:36:15.301 14:07:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:15.301 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:15.301 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:15.301 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:15.301 14:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:17.840 14:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:17.840 00:36:17.840 real 0m25.902s 00:36:17.840 user 1m31.893s 00:36:17.840 sys 0m11.319s 00:36:17.840 14:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:17.840 14:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:17.840 ************************************ 00:36:17.840 END TEST nvmf_fio_target 00:36:17.840 ************************************ 00:36:17.840 14:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:36:17.840 14:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:17.840 14:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:17.840 14:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:17.840 ************************************ 00:36:17.840 START TEST nvmf_bdevio 00:36:17.840 
************************************ 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:36:17.840 * Looking for test storage... 00:36:17.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:17.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:17.840 --rc genhtml_branch_coverage=1 00:36:17.840 --rc genhtml_function_coverage=1 00:36:17.840 --rc genhtml_legend=1 00:36:17.840 --rc geninfo_all_blocks=1 00:36:17.840 --rc geninfo_unexecuted_blocks=1 00:36:17.840 00:36:17.840 ' 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:17.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:17.840 --rc genhtml_branch_coverage=1 00:36:17.840 --rc genhtml_function_coverage=1 00:36:17.840 --rc genhtml_legend=1 00:36:17.840 --rc geninfo_all_blocks=1 00:36:17.840 --rc geninfo_unexecuted_blocks=1 00:36:17.840 00:36:17.840 ' 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:17.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:17.840 --rc genhtml_branch_coverage=1 00:36:17.840 --rc genhtml_function_coverage=1 00:36:17.840 --rc genhtml_legend=1 00:36:17.840 --rc geninfo_all_blocks=1 00:36:17.840 --rc geninfo_unexecuted_blocks=1 00:36:17.840 00:36:17.840 ' 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:17.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:36:17.840 --rc genhtml_branch_coverage=1 00:36:17.840 --rc genhtml_function_coverage=1 00:36:17.840 --rc genhtml_legend=1 00:36:17.840 --rc geninfo_all_blocks=1 00:36:17.840 --rc geninfo_unexecuted_blocks=1 00:36:17.840 00:36:17.840 ' 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:17.840 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:17.841 14:08:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.841 14:08:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:36:17.841 14:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:24.411 14:08:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:24.411 14:08:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:24.411 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:24.411 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:24.411 Found net devices under 0000:86:00.0: cvl_0_0 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:24.411 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:24.412 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:24.412 Found net devices under 0000:86:00.1: cvl_0_1 00:36:24.412 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:24.412 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:24.412 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:36:24.412 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:24.412 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:24.412 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:24.412 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:24.412 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:24.412 
14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:24.412 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:24.412 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:24.412 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:24.412 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:24.412 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:24.412 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:24.412 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:24.412 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:24.412 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:24.412 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:24.412 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:24.412 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:24.412 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:24.412 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:36:24.412 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:24.412 14:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:24.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:24.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:36:24.412 00:36:24.412 --- 10.0.0.2 ping statistics --- 00:36:24.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:24.412 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:24.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:24.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:36:24.412 00:36:24.412 --- 10.0.0.1 ping statistics --- 00:36:24.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:24.412 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=902281 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 902281 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 902281 ']' 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:24.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:24.412 [2024-12-05 14:08:06.192953] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:24.412 [2024-12-05 14:08:06.193918] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:36:24.412 [2024-12-05 14:08:06.193956] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:24.412 [2024-12-05 14:08:06.273445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:24.412 [2024-12-05 14:08:06.313447] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:24.412 [2024-12-05 14:08:06.313488] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:24.412 [2024-12-05 14:08:06.313495] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:24.412 [2024-12-05 14:08:06.313500] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:24.412 [2024-12-05 14:08:06.313505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:24.412 [2024-12-05 14:08:06.315151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:24.412 [2024-12-05 14:08:06.315262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:36:24.412 [2024-12-05 14:08:06.315376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:24.412 [2024-12-05 14:08:06.315390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:36:24.412 [2024-12-05 14:08:06.383084] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:24.412 [2024-12-05 14:08:06.383671] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:24.412 [2024-12-05 14:08:06.383972] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:36:24.412 [2024-12-05 14:08:06.384177] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:24.412 [2024-12-05 14:08:06.384237] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:24.412 [2024-12-05 14:08:06.460135] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:24.412 Malloc0 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:24.412 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.413 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:24.413 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.413 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:24.413 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.413 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:24.413 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.413 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:24.413 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.413 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:24.413 [2024-12-05 14:08:06.540377] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:36:24.413 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.413 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:36:24.413 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:36:24.413 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:36:24.413 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:36:24.413 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:24.413 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:24.413 { 00:36:24.413 "params": { 00:36:24.413 "name": "Nvme$subsystem", 00:36:24.413 "trtype": "$TEST_TRANSPORT", 00:36:24.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:24.413 "adrfam": "ipv4", 00:36:24.413 "trsvcid": "$NVMF_PORT", 00:36:24.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:24.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:24.413 "hdgst": ${hdgst:-false}, 00:36:24.413 "ddgst": ${ddgst:-false} 00:36:24.413 }, 00:36:24.413 "method": "bdev_nvme_attach_controller" 00:36:24.413 } 00:36:24.413 EOF 00:36:24.413 )") 00:36:24.413 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:36:24.413 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:36:24.413 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:36:24.413 14:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:24.413 "params": { 00:36:24.413 "name": "Nvme1", 00:36:24.413 "trtype": "tcp", 00:36:24.413 "traddr": "10.0.0.2", 00:36:24.413 "adrfam": "ipv4", 00:36:24.413 "trsvcid": "4420", 00:36:24.413 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:24.413 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:24.413 "hdgst": false, 00:36:24.413 "ddgst": false 00:36:24.413 }, 00:36:24.413 "method": "bdev_nvme_attach_controller" 00:36:24.413 }' 00:36:24.413 [2024-12-05 14:08:06.593890] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:36:24.413 [2024-12-05 14:08:06.593939] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid902413 ] 00:36:24.413 [2024-12-05 14:08:06.668684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:24.413 [2024-12-05 14:08:06.712140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:24.413 [2024-12-05 14:08:06.712248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:24.413 [2024-12-05 14:08:06.712249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:24.413 I/O targets: 00:36:24.413 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:36:24.413 00:36:24.413 00:36:24.413 CUnit - A unit testing framework for C - Version 2.1-3 00:36:24.413 http://cunit.sourceforge.net/ 00:36:24.413 00:36:24.413 00:36:24.413 Suite: bdevio tests on: Nvme1n1 00:36:24.413 Test: blockdev write read block ...passed 00:36:24.413 Test: blockdev write zeroes read block ...passed 00:36:24.413 Test: blockdev write zeroes read no split ...passed 00:36:24.671 Test: blockdev 
write zeroes read split ...passed 00:36:24.671 Test: blockdev write zeroes read split partial ...passed 00:36:24.671 Test: blockdev reset ...[2024-12-05 14:08:07.093849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:36:24.671 [2024-12-05 14:08:07.093908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae9350 (9): Bad file descriptor 00:36:24.671 [2024-12-05 14:08:07.186515] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:36:24.671 passed 00:36:24.671 Test: blockdev write read 8 blocks ...passed 00:36:24.671 Test: blockdev write read size > 128k ...passed 00:36:24.671 Test: blockdev write read invalid size ...passed 00:36:24.930 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:36:24.930 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:36:24.930 Test: blockdev write read max offset ...passed 00:36:24.930 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:36:24.930 Test: blockdev writev readv 8 blocks ...passed 00:36:24.930 Test: blockdev writev readv 30 x 1block ...passed 00:36:24.930 Test: blockdev writev readv block ...passed 00:36:24.930 Test: blockdev writev readv size > 128k ...passed 00:36:24.930 Test: blockdev writev readv size > 128k in two iovs ...passed 00:36:24.930 Test: blockdev comparev and writev ...[2024-12-05 14:08:07.477237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:24.930 [2024-12-05 14:08:07.477264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.930 [2024-12-05 14:08:07.477278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:24.930 
[2024-12-05 14:08:07.477287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:24.930 [2024-12-05 14:08:07.477579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:24.930 [2024-12-05 14:08:07.477590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:24.930 [2024-12-05 14:08:07.477602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:24.930 [2024-12-05 14:08:07.477609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:24.930 [2024-12-05 14:08:07.477893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:24.930 [2024-12-05 14:08:07.477903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:24.930 [2024-12-05 14:08:07.477915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:24.930 [2024-12-05 14:08:07.477927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:24.930 [2024-12-05 14:08:07.478206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:24.930 [2024-12-05 14:08:07.478217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:24.930 [2024-12-05 14:08:07.478228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:24.930 [2024-12-05 14:08:07.478235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:25.258 passed 00:36:25.258 Test: blockdev nvme passthru rw ...passed 00:36:25.258 Test: blockdev nvme passthru vendor specific ...[2024-12-05 14:08:07.559623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:25.258 [2024-12-05 14:08:07.559642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:25.258 [2024-12-05 14:08:07.559750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:25.258 [2024-12-05 14:08:07.559760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:25.258 [2024-12-05 14:08:07.559867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:25.258 [2024-12-05 14:08:07.559876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:25.258 [2024-12-05 14:08:07.559978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:25.258 [2024-12-05 14:08:07.559987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:25.258 passed 00:36:25.258 Test: blockdev nvme admin passthru ...passed 00:36:25.258 Test: blockdev copy ...passed 00:36:25.258 00:36:25.258 Run Summary: Type Total Ran Passed Failed Inactive 00:36:25.258 suites 1 1 n/a 0 0 00:36:25.258 tests 23 23 23 0 0 00:36:25.258 asserts 152 152 152 0 n/a 00:36:25.258 00:36:25.258 Elapsed time = 1.430 
seconds 00:36:25.258 14:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:25.258 14:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.258 14:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:25.258 14:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.258 14:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:36:25.258 14:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:36:25.258 14:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:25.258 14:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:36:25.258 14:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:25.258 14:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:36:25.258 14:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:25.258 14:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:25.258 rmmod nvme_tcp 00:36:25.258 rmmod nvme_fabrics 00:36:25.258 rmmod nvme_keyring 00:36:25.258 14:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:25.604 14:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:36:25.604 14:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:36:25.604 14:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 902281 ']' 00:36:25.604 14:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 902281 00:36:25.604 14:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 902281 ']' 00:36:25.604 14:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 902281 00:36:25.604 14:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:36:25.604 14:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:25.604 14:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 902281 00:36:25.604 14:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:36:25.604 14:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:36:25.604 14:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 902281' 00:36:25.604 killing process with pid 902281 00:36:25.604 14:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 902281 00:36:25.604 14:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 902281 00:36:25.604 14:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:25.604 14:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:25.604 14:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:25.604 14:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 
-- # iptr 00:36:25.604 14:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:36:25.604 14:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:25.604 14:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:36:25.604 14:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:25.604 14:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:25.604 14:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:25.604 14:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:25.604 14:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:28.141 14:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:28.141 00:36:28.141 real 0m10.139s 00:36:28.141 user 0m9.637s 00:36:28.141 sys 0m5.219s 00:36:28.141 14:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:28.141 14:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:28.141 ************************************ 00:36:28.141 END TEST nvmf_bdevio 00:36:28.141 ************************************ 00:36:28.141 14:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:36:28.141 00:36:28.141 real 4m33.165s 00:36:28.141 user 9m9.422s 00:36:28.141 sys 1m51.096s 00:36:28.141 14:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:36:28.141 14:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:28.141 ************************************ 00:36:28.141 END TEST nvmf_target_core_interrupt_mode 00:36:28.141 ************************************ 00:36:28.141 14:08:10 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:36:28.141 14:08:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:28.141 14:08:10 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:28.141 14:08:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:28.141 ************************************ 00:36:28.141 START TEST nvmf_interrupt 00:36:28.141 ************************************ 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:36:28.141 * Looking for test storage... 
00:36:28.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:28.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:28.141 --rc genhtml_branch_coverage=1 00:36:28.141 --rc genhtml_function_coverage=1 00:36:28.141 --rc genhtml_legend=1 00:36:28.141 --rc geninfo_all_blocks=1 00:36:28.141 --rc geninfo_unexecuted_blocks=1 00:36:28.141 00:36:28.141 ' 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:28.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:28.141 --rc genhtml_branch_coverage=1 00:36:28.141 --rc 
genhtml_function_coverage=1 00:36:28.141 --rc genhtml_legend=1 00:36:28.141 --rc geninfo_all_blocks=1 00:36:28.141 --rc geninfo_unexecuted_blocks=1 00:36:28.141 00:36:28.141 ' 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:28.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:28.141 --rc genhtml_branch_coverage=1 00:36:28.141 --rc genhtml_function_coverage=1 00:36:28.141 --rc genhtml_legend=1 00:36:28.141 --rc geninfo_all_blocks=1 00:36:28.141 --rc geninfo_unexecuted_blocks=1 00:36:28.141 00:36:28.141 ' 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:28.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:28.141 --rc genhtml_branch_coverage=1 00:36:28.141 --rc genhtml_function_coverage=1 00:36:28.141 --rc genhtml_legend=1 00:36:28.141 --rc geninfo_all_blocks=1 00:36:28.141 --rc geninfo_unexecuted_blocks=1 00:36:28.141 00:36:28.141 ' 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:28.141 
14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:28.141 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:28.142 
14:08:10 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:28.142 14:08:10 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:28.142 
14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:36:28.142 14:08:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:33.434 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:33.434 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:36:33.434 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:33.434 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:33.434 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:33.434 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:33.693 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:33.693 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:36:33.693 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:33.693 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:36:33.693 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:36:33.693 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:36:33.693 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:36:33.693 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:36:33.693 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:36:33.693 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:33.693 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:33.694 14:08:16 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:33.694 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:33.694 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:33.694 14:08:16 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:33.694 Found net devices under 0000:86:00.0: cvl_0_0 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:33.694 Found net devices under 0000:86:00.1: cvl_0_1 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:33.694 14:08:16 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:33.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:33.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:36:33.694 00:36:33.694 --- 10.0.0.2 ping statistics --- 00:36:33.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:33.694 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:36:33.694 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:33.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:33.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:36:33.954 00:36:33.954 --- 10.0.0.1 ping statistics --- 00:36:33.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:33.954 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:36:33.954 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:33.954 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:36:33.954 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:33.954 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:33.954 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:33.954 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:33.954 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:33.954 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:33.954 14:08:16 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:33.954 14:08:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:36:33.954 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:33.954 14:08:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:33.954 14:08:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:33.954 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=906052 00:36:33.954 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 906052 00:36:33.954 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:36:33.954 14:08:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 906052 ']' 00:36:33.954 14:08:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:33.954 14:08:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:33.954 14:08:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:33.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:33.954 14:08:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:33.954 14:08:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:33.954 [2024-12-05 14:08:16.377768] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:33.954 [2024-12-05 14:08:16.378731] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:36:33.954 [2024-12-05 14:08:16.378764] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:33.954 [2024-12-05 14:08:16.442417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:33.954 [2024-12-05 14:08:16.484543] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:33.954 [2024-12-05 14:08:16.484575] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:33.954 [2024-12-05 14:08:16.484582] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:33.954 [2024-12-05 14:08:16.484588] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:33.954 [2024-12-05 14:08:16.484593] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:33.954 [2024-12-05 14:08:16.487387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:33.954 [2024-12-05 14:08:16.487391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:34.214 [2024-12-05 14:08:16.554847] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:34.214 [2024-12-05 14:08:16.555364] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:34.214 [2024-12-05 14:08:16.555390] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:36:34.214 5000+0 records in 00:36:34.214 5000+0 records out 00:36:34.214 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0184319 s, 556 MB/s 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:34.214 AIO0 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.214 14:08:16 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:34.214 [2024-12-05 14:08:16.684127] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:34.214 [2024-12-05 14:08:16.720353] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 906052 0 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 
-- # reactor_is_busy_or_idle 906052 0 idle 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=906052 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 906052 -w 256 00:36:34.214 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:34.473 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 906052 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.23 reactor_0' 00:36:34.473 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 906052 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.23 reactor_0 00:36:34.473 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:34.473 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:34.473 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:34.473 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:34.473 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:34.473 14:08:16 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:34.473 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:34.473 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:34.473 14:08:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:36:34.473 14:08:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 906052 1 00:36:34.474 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 906052 1 idle 00:36:34.474 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=906052 00:36:34.474 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:34.474 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:34.474 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:34.474 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:34.474 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:34.474 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:34.474 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:34.474 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:34.474 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:34.474 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 906052 -w 256 00:36:34.474 14:08:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 906093 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 906093 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 
reactor_1 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=906225 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 906052 0 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 906052 0 busy 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=906052 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 
00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 906052 -w 256 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 906052 root 20 0 128.2g 46848 33792 R 66.7 0.0 0:00.34 reactor_0' 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 906052 root 20 0 128.2g 46848 33792 R 66.7 0.0 0:00.34 reactor_0 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=66.7 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=66 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- 
# BUSY_THRESHOLD=30 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 906052 1 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 906052 1 busy 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=906052 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 906052 -w 256 00:36:34.733 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:34.992 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 906093 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.22 reactor_1' 00:36:34.992 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 906093 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.22 reactor_1 00:36:34.992 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:34.992 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:34.992 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:36:34.992 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:36:34.992 14:08:17 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:36:34.992 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:36:34.992 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:36:34.992 14:08:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:36:34.992 14:08:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 906225
00:36:44.968 Initializing NVMe Controllers
00:36:44.968 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:36:44.968 Controller IO queue size 256, less than required.
00:36:44.968 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:36:44.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:36:44.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:36:44.969 Initialization complete. Launching workers.
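The reactor_is_busy_or_idle trace above reduces to one technique: take a single batch iteration of `top` in thread mode, grep the `reactor_N` row, and read the %CPU column. A minimal standalone sketch of that probe (the function names here are illustrative, not the exact helpers in interrupt/common.sh):

```shell
# parse_cpu_rate: pull the integer %CPU out of one `top -bH` thread row.
# Mirrors the trace above: strip leading spaces, take column 9 (%CPU),
# drop the fractional part so it compares cleanly against an integer threshold.
parse_cpu_rate() {
    printf '%s\n' "$1" | sed -e 's/^[[:space:]]*//' | awk '{print $9}' | cut -d. -f1
}

# reactor_is_busy: hypothetical wrapper; true when the reactor thread of the
# given SPDK pid is at or above the busy threshold (30 in this log).
reactor_is_busy() {
    pid=$1 idx=$2 threshold=${3:-30}
    row=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}") || return 1
    rate=$(parse_cpu_rate "$row")
    [ "$rate" -ge "$threshold" ]
}
```

Applied to the reactor_0 row captured above, parse_cpu_rate yields 66, which clears the busy threshold of 30.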
00:36:44.969 ========================================================
00:36:44.969 Latency(us)
00:36:44.969 Device Information : IOPS MiB/s Average min max
00:36:44.969 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16343.40 63.84 15670.37 3477.28 32011.74
00:36:44.969 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16611.30 64.89 15414.62 7872.07 28831.08
00:36:44.969 ========================================================
00:36:44.969 Total : 32954.70 128.73 15541.46 3477.28 32011.74
00:36:44.969
00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 906052 0
00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 906052 0 idle
00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=906052
00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p
906052 -w 256 00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 906052 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.22 reactor_0' 00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 906052 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.22 reactor_0 00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 906052 1 00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 906052 1 idle 00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=906052 00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:44.969 14:08:27 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 906052 -w 256 00:36:44.969 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:45.228 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 906093 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1' 00:36:45.228 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 906093 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1 00:36:45.228 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:45.228 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:45.228 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:45.228 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:45.228 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:45.228 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:45.228 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:45.228 14:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:45.228 14:08:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:45.488 14:08:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
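The waitforserial helper traced next polls `lsblk` until a block device with the expected serial appears after `nvme connect`. A simplified sketch of that loop; the `lister` parameter is a hypothetical refactor added here so the loop can be exercised without real hardware (the real helper calls `lsblk -l -o NAME,SERIAL` directly):

```shell
# waitforserial_sketch: retry until `$lister` reports a device whose SERIAL
# column matches, mirroring the lsblk/grep loop in the trace above.
waitforserial_sketch() {
    serial=$1 retries=${2:-15} lister=${3:-default_lister}
    i=0
    while [ "$i" -lt "$retries" ]; do
        # -w: match the serial as a whole word, as the trace's grep does.
        if $lister | grep -q -w "$serial"; then
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# default_lister: the probe the test scripts actually use.
default_lister() {
    lsblk -l -o NAME,SERIAL
}
```

waitforserial_disconnect, traced a bit further down, is the same loop inverted: it returns once the grep stops matching.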
00:36:45.488 14:08:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:36:45.488 14:08:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:36:45.488 14:08:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:36:45.488 14:08:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:36:48.023 14:08:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:36:48.023 14:08:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:36:48.023 14:08:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:36:48.023 14:08:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:36:48.023 14:08:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:36:48.023 14:08:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:36:48.023 14:08:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:36:48.023 14:08:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 906052 0 00:36:48.023 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 906052 0 idle 00:36:48.023 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=906052 00:36:48.023 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 906052 -w 256 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 906052 root 20 0 128.2g 72960 33792 S 6.7 0.0 0:20.46 reactor_0' 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 906052 root 20 0 128.2g 72960 33792 S 6.7 0.0 0:20.46 reactor_0 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 906052 1 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 906052 1 idle 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=906052 00:36:48.024 
14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 906052 -w 256 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 906093 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.09 reactor_1' 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 906093 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.09 reactor_1 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold ))
00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:36:48.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0
00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0
00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini
00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup
00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync
00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e
00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:36:48.024 rmmod nvme_tcp
00:36:48.024 rmmod nvme_fabrics
00:36:48.024 rmmod nvme_keyring
00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:36:48.024 14:08:30
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:36:48.024 14:08:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:36:48.282 14:08:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 906052 ']' 00:36:48.282 14:08:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 906052 00:36:48.283 14:08:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 906052 ']' 00:36:48.283 14:08:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 906052 00:36:48.283 14:08:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:36:48.283 14:08:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:48.283 14:08:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 906052 00:36:48.283 14:08:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:48.283 14:08:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:48.283 14:08:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 906052' 00:36:48.283 killing process with pid 906052 00:36:48.283 14:08:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 906052 00:36:48.283 14:08:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 906052 00:36:48.283 14:08:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:48.283 14:08:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:48.283 14:08:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:48.283 14:08:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:36:48.283 14:08:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:36:48.283 14:08:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:48.283 14:08:30 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore
00:36:48.283 14:08:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:36:48.541 14:08:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns
00:36:48.541 14:08:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:48.541 14:08:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:36:48.541 14:08:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:50.446 14:08:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:36:50.446
00:36:50.446 real 0m22.686s
00:36:50.446 user 0m39.615s
00:36:50.446 sys 0m8.368s
00:36:50.446 14:08:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:50.446 14:08:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:36:50.446 ************************************
00:36:50.446 END TEST nvmf_interrupt
00:36:50.446 ************************************
00:36:50.446
00:36:50.446 real 27m26.248s
00:36:50.446 user 56m35.739s
00:36:50.446 sys 9m15.855s
00:36:50.446 14:08:32 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:50.446 14:08:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:36:50.446 ************************************
00:36:50.446 END TEST nvmf_tcp
00:36:50.446 ************************************
00:36:50.446 14:08:33 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]]
00:36:50.446 14:08:33 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:36:50.446 14:08:33 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:36:50.446 14:08:33 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:50.446 14:08:33 -- common/autotest_common.sh@10 -- # set +x
00:36:50.706 ************************************
00:36:50.706 START TEST spdkcli_nvmf_tcp 00:36:50.706 ************************************ 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:50.706 * Looking for test storage... 00:36:50.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:50.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:50.706 --rc genhtml_branch_coverage=1 00:36:50.706 --rc genhtml_function_coverage=1 00:36:50.706 --rc genhtml_legend=1 00:36:50.706 --rc geninfo_all_blocks=1 00:36:50.706 --rc geninfo_unexecuted_blocks=1 00:36:50.706 00:36:50.706 ' 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:50.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:50.706 --rc genhtml_branch_coverage=1 00:36:50.706 --rc genhtml_function_coverage=1 00:36:50.706 --rc genhtml_legend=1 00:36:50.706 --rc geninfo_all_blocks=1 
00:36:50.706 --rc geninfo_unexecuted_blocks=1 00:36:50.706 00:36:50.706 ' 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:50.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:50.706 --rc genhtml_branch_coverage=1 00:36:50.706 --rc genhtml_function_coverage=1 00:36:50.706 --rc genhtml_legend=1 00:36:50.706 --rc geninfo_all_blocks=1 00:36:50.706 --rc geninfo_unexecuted_blocks=1 00:36:50.706 00:36:50.706 ' 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:50.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:50.706 --rc genhtml_branch_coverage=1 00:36:50.706 --rc genhtml_function_coverage=1 00:36:50.706 --rc genhtml_legend=1 00:36:50.706 --rc geninfo_all_blocks=1 00:36:50.706 --rc geninfo_unexecuted_blocks=1 00:36:50.706 00:36:50.706 ' 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
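The lcov check traced above (scripts/common.sh's `lt 1.15 2` via cmp_versions) compares dotted version strings component by component rather than lexicographically. A compact equivalent, swapping the script's manual per-component loop for GNU `sort -V` (an assumption here that GNU coreutils is available, which holds on these test machines):

```shell
# version_lt: true when $1 is strictly older than $2 under component-wise
# numeric comparison (so 1.9 < 1.15, unlike plain string comparison).
version_lt() {
    [ "$1" = "$2" ] && return 1
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
```

This is what gates the LCOV_OPTS export that follows in the trace: the branch/function coverage flags are only set when the installed lcov is at least 1.15.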
00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:50.706 14:08:33 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.707 14:08:33 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.707 14:08:33 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.707 14:08:33 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:36:50.707 14:08:33 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.707 14:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:36:50.707 14:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:36:50.707 14:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:50.707 14:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:50.707 14:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:50.707 14:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:50.707 14:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:50.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:50.707 14:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:50.707 14:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:50.707 14:08:33 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:50.707 14:08:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:36:50.707 14:08:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:36:50.707 14:08:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:36:50.707 14:08:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:36:50.707 14:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:50.707 14:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:50.707 14:08:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:36:50.707 14:08:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=908912 00:36:50.707 14:08:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 908912 00:36:50.707 14:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 908912 ']' 00:36:50.707 14:08:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:36:50.707 14:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:50.707 14:08:33 
spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:50.707 14:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:50.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:50.707 14:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:50.707 14:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:50.965 [2024-12-05 14:08:33.305259] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:36:50.966 [2024-12-05 14:08:33.305305] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid908912 ] 00:36:50.966 [2024-12-05 14:08:33.379019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:50.966 [2024-12-05 14:08:33.422145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:50.966 [2024-12-05 14:08:33.422148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:50.966 14:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:50.966 14:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:36:50.966 14:08:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:36:50.966 14:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:50.966 14:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:50.966 14:08:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:36:50.966 14:08:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:36:50.966 14:08:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:36:50.966 
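The `waitforlisten 908912` record above polls until the freshly started `nvmf_tgt` process is up and its RPC socket accepts connections. A minimal sketch of that wait-loop idea, assuming an illustrative socket path and retry count (the real helper lives in autotest_common.sh and does more, e.g. per-retry sleeps and kill -0 liveness checks):

```shell
# Poll for a UNIX domain socket to appear, up to max_retries times.
# rpc_sock and max_retries are illustrative values, not the autotest defaults.
rpc_sock=/tmp/demo_spdk.sock
max_retries=3
i=0
while [ "$i" -lt "$max_retries" ]; do
  if [ -S "$rpc_sock" ]; then
    # Socket exists: the target is listening, stop polling.
    echo "listening on $rpc_sock"
    break
  fi
  i=$((i + 1))
done
echo "stopped after $i checks"
```

With no socket present the loop simply exhausts its retries, which is the failure path `waitforlisten` guards against with its own timeout.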
14:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:50.966 14:08:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:51.224 14:08:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:36:51.224 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:36:51.224 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:36:51.224 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:36:51.224 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:36:51.224 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:36:51.224 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:36:51.224 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:51.224 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:36:51.224 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:36:51.224 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:51.224 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:51.224 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:36:51.224 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:51.224 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:51.224 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:36:51.224 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:51.224 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:51.224 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:51.225 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:51.225 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:36:51.225 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:36:51.225 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:51.225 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:36:51.225 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:51.225 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:36:51.225 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:36:51.225 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:36:51.225 ' 00:36:53.755 [2024-12-05 14:08:36.250013] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:55.128 [2024-12-05 14:08:37.586476] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:36:57.658 [2024-12-05 14:08:40.070101] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4261 *** 00:37:00.193 [2024-12-05 14:08:42.224665] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:37:01.572 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:37:01.572 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:37:01.572 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:37:01.572 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:37:01.572 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:37:01.572 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:37:01.572 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:37:01.572 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:01.572 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:37:01.572 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:37:01.572 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:01.572 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:01.572 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:37:01.572 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:01.572 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
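Each `Executing command:` record above is a triple of [spdkcli command, expected match substring, expect-success flag], fed to `spdkcli_job.py` as quoted arguments. A hedged sketch of that triple layout, using a pipe-delimited stand-in (the real harness parses its own quoting; the `|` separator here is purely illustrative):

```shell
# One record in [command, match, expect-success] form, '|'-delimited for the demo.
record="/bdevs/malloc create 32 512 Malloc1|Malloc1|True"
# Split the record into its three fields.
IFS='|' read -r cmd match ok <<EOF
$record
EOF
echo "cmd=$cmd"
echo "match=$match expect=$ok"
```

The job runs `cmd`, checks its output against `match`, and compares the outcome with the flag, which is why the log later shows `False` entries for commands expected to be rejected (e.g. `allow_any_host True` repeated on the same subsystem).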
00:37:01.572 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:37:01.572 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:01.572 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:01.572 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:01.572 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:01.572 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:37:01.572 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:37:01.572 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:01.572 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:37:01.572 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:01.572 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:37:01.572 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:37:01.572 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:37:01.572 14:08:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:37:01.572 14:08:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:01.572 
14:08:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:01.572 14:08:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:37:01.572 14:08:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:01.572 14:08:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:01.572 14:08:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:37:01.572 14:08:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:37:02.140 14:08:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:37:02.141 14:08:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:37:02.141 14:08:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:37:02.141 14:08:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:02.141 14:08:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:02.141 14:08:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:37:02.141 14:08:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:02.141 14:08:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:02.141 14:08:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:37:02.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:37:02.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:02.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:37:02.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:37:02.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:37:02.141 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:37:02.141 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:02.141 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:37:02.141 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:37:02.141 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:37:02.141 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:37:02.141 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:37:02.141 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:37:02.141 ' 00:37:07.413 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:37:07.413 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:37:07.413 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:07.413 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:37:07.413 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:37:07.413 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:37:07.413 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:37:07.413 Executing command: ['/nvmf/subsystem 
delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:07.413 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:37:07.413 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:37:07.413 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:37:07.413 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:37:07.413 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:37:07.413 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:37:07.672 14:08:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:37:07.672 14:08:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:07.672 14:08:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:07.672 14:08:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 908912 00:37:07.672 14:08:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 908912 ']' 00:37:07.672 14:08:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 908912 00:37:07.672 14:08:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:37:07.672 14:08:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:07.672 14:08:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 908912 00:37:07.672 14:08:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:07.672 14:08:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:07.672 14:08:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 908912' 00:37:07.672 killing process with pid 908912 00:37:07.672 14:08:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 908912 00:37:07.672 14:08:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 908912 00:37:07.938 14:08:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # 
cleanup 00:37:07.938 14:08:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:37:07.938 14:08:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 908912 ']' 00:37:07.938 14:08:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 908912 00:37:07.938 14:08:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 908912 ']' 00:37:07.938 14:08:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 908912 00:37:07.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (908912) - No such process 00:37:07.938 14:08:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 908912 is not found' 00:37:07.938 Process with pid 908912 is not found 00:37:07.938 14:08:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:37:07.938 14:08:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:37:07.938 14:08:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:37:07.938 00:37:07.938 real 0m17.313s 00:37:07.938 user 0m38.096s 00:37:07.938 sys 0m0.810s 00:37:07.938 14:08:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:07.938 14:08:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:07.938 ************************************ 00:37:07.938 END TEST spdkcli_nvmf_tcp 00:37:07.939 ************************************ 00:37:07.939 14:08:50 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:07.939 14:08:50 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:07.939 14:08:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:07.939 14:08:50 -- common/autotest_common.sh@10 
-- # set +x 00:37:07.939 ************************************ 00:37:07.939 START TEST nvmf_identify_passthru 00:37:07.939 ************************************ 00:37:07.939 14:08:50 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:07.939 * Looking for test storage... 00:37:07.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:07.939 14:08:50 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:07.939 14:08:50 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:37:07.939 14:08:50 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:08.274 14:08:50 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:08.274 14:08:50 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:08.274 14:08:50 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:08.274 14:08:50 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:08.274 14:08:50 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:37:08.274 14:08:50 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:37:08.274 14:08:50 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:37:08.274 14:08:50 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:37:08.274 14:08:50 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:37:08.274 14:08:50 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:37:08.274 14:08:50 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:37:08.274 14:08:50 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:08.274 14:08:50 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:37:08.274 14:08:50 
nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:37:08.274 14:08:50 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:08.274 14:08:50 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:08.274 14:08:50 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:37:08.275 14:08:50 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:37:08.275 14:08:50 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:08.275 14:08:50 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:37:08.275 14:08:50 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:37:08.275 14:08:50 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:37:08.275 14:08:50 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:37:08.275 14:08:50 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:08.275 14:08:50 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:37:08.275 14:08:50 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:37:08.275 14:08:50 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:08.275 14:08:50 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:08.275 14:08:50 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:37:08.275 14:08:50 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:08.275 14:08:50 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:08.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:08.275 --rc genhtml_branch_coverage=1 00:37:08.275 --rc genhtml_function_coverage=1 00:37:08.275 --rc genhtml_legend=1 00:37:08.275 --rc geninfo_all_blocks=1 00:37:08.275 --rc geninfo_unexecuted_blocks=1 00:37:08.275 00:37:08.275 ' 00:37:08.275 
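The `cmp_versions 1.15 '<' 2` trace above splits both version strings on dots and compares them numerically field by field. A compact sketch of that approach for two-field versions (the function name and two-field limit are simplifications of the real scripts/common.sh helper, which handles arbitrary depth):

```shell
# Return 0 (true) if dot-separated version $1 is numerically less than $2.
ver_lt() {
  IFS=. read -r a1 a2 <<EOF
$1
EOF
  IFS=. read -r b1 b2 <<EOF
$2
EOF
  # Compare the major fields first; missing fields default to 0.
  [ "${a1:-0}" -lt "${b1:-0}" ] && return 0
  [ "${a1:-0}" -gt "${b1:-0}" ] && return 1
  # Majors equal: the minor field decides.
  [ "${a2:-0}" -lt "${b2:-0}" ]
}
if ver_lt 1.15 2; then echo "1.15 < 2"; fi
```

The `${a2:-0}` defaults also sidestep the `[: : integer expression expected` class of error visible elsewhere in this log, where an empty string reaches an `-eq` test unguarded.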
14:08:50 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:08.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:08.275 --rc genhtml_branch_coverage=1 00:37:08.275 --rc genhtml_function_coverage=1 00:37:08.275 --rc genhtml_legend=1 00:37:08.275 --rc geninfo_all_blocks=1 00:37:08.275 --rc geninfo_unexecuted_blocks=1 00:37:08.275 00:37:08.275 ' 00:37:08.275 14:08:50 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:08.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:08.275 --rc genhtml_branch_coverage=1 00:37:08.275 --rc genhtml_function_coverage=1 00:37:08.275 --rc genhtml_legend=1 00:37:08.275 --rc geninfo_all_blocks=1 00:37:08.275 --rc geninfo_unexecuted_blocks=1 00:37:08.275 00:37:08.275 ' 00:37:08.275 14:08:50 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:08.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:08.275 --rc genhtml_branch_coverage=1 00:37:08.275 --rc genhtml_function_coverage=1 00:37:08.275 --rc genhtml_legend=1 00:37:08.275 --rc geninfo_all_blocks=1 00:37:08.275 --rc geninfo_unexecuted_blocks=1 00:37:08.275 00:37:08.275 ' 00:37:08.275 14:08:50 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:08.275 14:08:50 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:37:08.275 14:08:50 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:08.275 14:08:50 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:08.275 14:08:50 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:08.275 14:08:50 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.275 14:08:50 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.275 14:08:50 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.275 14:08:50 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:08.275 14:08:50 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:37:08.275 14:08:50 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:08.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:08.275 14:08:50 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:08.275 14:08:50 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:37:08.275 14:08:50 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:08.275 14:08:50 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:08.275 14:08:50 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:08.275 14:08:50 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.275 14:08:50 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.275 14:08:50 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.275 14:08:50 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:08.275 14:08:50 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.275 14:08:50 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:08.275 14:08:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:08.275 14:08:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:08.275 14:08:50 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:37:08.275 14:08:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:14.849 
14:08:56 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:37:14.849 Found 0000:86:00.0 (0x8086 - 0x159b) 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:37:14.849 Found 0000:86:00.1 
(0x8086 - 0x159b) 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:37:14.849 Found net devices under 0000:86:00.0: cvl_0_0 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:14.849 14:08:56 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:37:14.849 Found net devices under 0000:86:00.1: cvl_0_1 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:14.849 
14:08:56 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:14.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:14.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:37:14.849 00:37:14.849 --- 10.0.0.2 ping statistics --- 00:37:14.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:14.849 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:14.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:14.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:37:14.849 00:37:14.849 --- 10.0.0.1 ping statistics --- 00:37:14.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:14.849 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:14.849 14:08:56 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:14.849 14:08:56 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:37:14.849 14:08:56 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:14.849 14:08:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:14.850 14:08:56 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:37:14.850 
14:08:56 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:37:14.850 14:08:56 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:37:14.850 14:08:56 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:37:14.850 14:08:56 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:37:14.850 14:08:56 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:37:14.850 14:08:56 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:37:14.850 14:08:56 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:37:14.850 14:08:56 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:14.850 14:08:56 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:37:14.850 14:08:56 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:37:14.850 14:08:56 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:37:14.850 14:08:56 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:37:14.850 14:08:56 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:37:14.850 14:08:56 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:37:14.850 14:08:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:37:14.850 14:08:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:37:14.850 14:08:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:37:19.035 14:09:01 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=PHLN951000C61P6AGN 00:37:19.035 14:09:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:37:19.035 14:09:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:37:19.035 14:09:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:37:24.304 14:09:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:37:24.304 14:09:06 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:37:24.304 14:09:06 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:24.304 14:09:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:24.304 14:09:06 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:37:24.304 14:09:06 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:24.304 14:09:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:24.304 14:09:06 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=916521 00:37:24.304 14:09:06 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:37:24.304 14:09:06 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:24.305 14:09:06 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 916521 00:37:24.305 14:09:06 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 916521 ']' 00:37:24.305 14:09:06 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:37:24.305 14:09:06 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:24.305 14:09:06 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:24.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:24.305 14:09:06 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:24.305 14:09:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:24.305 [2024-12-05 14:09:06.256078] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:37:24.305 [2024-12-05 14:09:06.256125] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:24.305 [2024-12-05 14:09:06.337305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:24.305 [2024-12-05 14:09:06.379906] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:24.305 [2024-12-05 14:09:06.379943] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:24.305 [2024-12-05 14:09:06.379950] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:24.305 [2024-12-05 14:09:06.379956] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:24.305 [2024-12-05 14:09:06.379961] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:24.305 [2024-12-05 14:09:06.381400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:24.305 [2024-12-05 14:09:06.381510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:24.305 [2024-12-05 14:09:06.381617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:24.305 [2024-12-05 14:09:06.381618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:24.564 14:09:07 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:24.564 14:09:07 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:37:24.564 14:09:07 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:37:24.564 14:09:07 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.564 14:09:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:24.564 INFO: Log level set to 20 00:37:24.564 INFO: Requests: 00:37:24.564 { 00:37:24.564 "jsonrpc": "2.0", 00:37:24.564 "method": "nvmf_set_config", 00:37:24.564 "id": 1, 00:37:24.564 "params": { 00:37:24.564 "admin_cmd_passthru": { 00:37:24.564 "identify_ctrlr": true 00:37:24.564 } 00:37:24.564 } 00:37:24.564 } 00:37:24.564 00:37:24.564 INFO: response: 00:37:24.564 { 00:37:24.564 "jsonrpc": "2.0", 00:37:24.564 "id": 1, 00:37:24.564 "result": true 00:37:24.564 } 00:37:24.564 00:37:24.564 14:09:07 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.564 14:09:07 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:37:24.564 14:09:07 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.564 14:09:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:24.564 INFO: Setting log level to 20 00:37:24.564 INFO: Setting log level to 20 00:37:24.564 INFO: Log level set to 20 00:37:24.564 INFO: Log level set to 20 00:37:24.564 
INFO: Requests: 00:37:24.564 { 00:37:24.564 "jsonrpc": "2.0", 00:37:24.564 "method": "framework_start_init", 00:37:24.564 "id": 1 00:37:24.564 } 00:37:24.564 00:37:24.564 INFO: Requests: 00:37:24.564 { 00:37:24.564 "jsonrpc": "2.0", 00:37:24.564 "method": "framework_start_init", 00:37:24.564 "id": 1 00:37:24.564 } 00:37:24.564 00:37:24.824 [2024-12-05 14:09:07.180511] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:37:24.824 INFO: response: 00:37:24.824 { 00:37:24.824 "jsonrpc": "2.0", 00:37:24.824 "id": 1, 00:37:24.824 "result": true 00:37:24.824 } 00:37:24.824 00:37:24.824 INFO: response: 00:37:24.824 { 00:37:24.824 "jsonrpc": "2.0", 00:37:24.824 "id": 1, 00:37:24.824 "result": true 00:37:24.824 } 00:37:24.824 00:37:24.824 14:09:07 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.824 14:09:07 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:24.824 14:09:07 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.824 14:09:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:24.824 INFO: Setting log level to 40 00:37:24.824 INFO: Setting log level to 40 00:37:24.824 INFO: Setting log level to 40 00:37:24.824 [2024-12-05 14:09:07.193834] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:24.824 14:09:07 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.824 14:09:07 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:37:24.824 14:09:07 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:24.824 14:09:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:24.824 14:09:07 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:37:24.824 14:09:07 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.824 14:09:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:28.114 Nvme0n1 00:37:28.114 14:09:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.114 14:09:10 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:37:28.114 14:09:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.114 14:09:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:28.114 14:09:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.114 14:09:10 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:37:28.114 14:09:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.114 14:09:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:28.114 14:09:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.114 14:09:10 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:28.114 14:09:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.114 14:09:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:28.114 [2024-12-05 14:09:10.100762] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:28.114 14:09:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.114 14:09:10 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:37:28.114 14:09:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.114 14:09:10 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:28.114 [ 00:37:28.114 { 00:37:28.114 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:37:28.114 "subtype": "Discovery", 00:37:28.114 "listen_addresses": [], 00:37:28.114 "allow_any_host": true, 00:37:28.114 "hosts": [] 00:37:28.114 }, 00:37:28.114 { 00:37:28.114 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:28.114 "subtype": "NVMe", 00:37:28.114 "listen_addresses": [ 00:37:28.114 { 00:37:28.114 "trtype": "TCP", 00:37:28.114 "adrfam": "IPv4", 00:37:28.114 "traddr": "10.0.0.2", 00:37:28.114 "trsvcid": "4420" 00:37:28.114 } 00:37:28.114 ], 00:37:28.114 "allow_any_host": true, 00:37:28.114 "hosts": [], 00:37:28.114 "serial_number": "SPDK00000000000001", 00:37:28.114 "model_number": "SPDK bdev Controller", 00:37:28.114 "max_namespaces": 1, 00:37:28.114 "min_cntlid": 1, 00:37:28.114 "max_cntlid": 65519, 00:37:28.114 "namespaces": [ 00:37:28.114 { 00:37:28.114 "nsid": 1, 00:37:28.114 "bdev_name": "Nvme0n1", 00:37:28.114 "name": "Nvme0n1", 00:37:28.114 "nguid": "FBA7DBA5AC244EA6A8FA3C0C054C603C", 00:37:28.114 "uuid": "fba7dba5-ac24-4ea6-a8fa-3c0c054c603c" 00:37:28.114 } 00:37:28.114 ] 00:37:28.114 } 00:37:28.114 ] 00:37:28.114 14:09:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.114 14:09:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:28.114 14:09:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:37:28.114 14:09:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:37:28.114 14:09:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN951000C61P6AGN 00:37:28.114 14:09:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:28.114 14:09:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:37:28.114 14:09:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:37:28.114 14:09:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:37:28.114 14:09:10 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN951000C61P6AGN '!=' PHLN951000C61P6AGN ']' 00:37:28.114 14:09:10 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:37:28.115 14:09:10 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:28.115 14:09:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.115 14:09:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:28.115 14:09:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.115 14:09:10 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:37:28.115 14:09:10 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:37:28.115 14:09:10 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:28.115 14:09:10 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:37:28.115 14:09:10 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:28.115 14:09:10 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:37:28.115 14:09:10 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:28.115 14:09:10 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:28.115 rmmod nvme_tcp 00:37:28.115 rmmod nvme_fabrics 00:37:28.115 rmmod nvme_keyring 00:37:28.373 14:09:10 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:28.373 14:09:10 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:37:28.373 14:09:10 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:37:28.373 14:09:10 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 916521 ']' 00:37:28.373 14:09:10 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 916521 00:37:28.373 14:09:10 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 916521 ']' 00:37:28.373 14:09:10 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 916521 00:37:28.373 14:09:10 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:37:28.373 14:09:10 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:28.373 14:09:10 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 916521 00:37:28.373 14:09:10 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:28.373 14:09:10 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:28.373 14:09:10 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 916521' 00:37:28.373 killing process with pid 916521 00:37:28.373 14:09:10 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 916521 00:37:28.373 14:09:10 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 916521 00:37:30.279 14:09:12 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:30.279 14:09:12 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:30.279 14:09:12 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:30.279 14:09:12 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:37:30.279 14:09:12 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:37:30.279 14:09:12 nvmf_identify_passthru -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:37:30.279 14:09:12 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:37:30.279 14:09:12 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:30.279 14:09:12 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:30.279 14:09:12 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:30.279 14:09:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:30.279 14:09:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:32.821 14:09:14 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:32.821 00:37:32.821 real 0m24.397s 00:37:32.821 user 0m33.294s 00:37:32.821 sys 0m6.403s 00:37:32.821 14:09:14 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:32.821 14:09:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:32.821 ************************************ 00:37:32.821 END TEST nvmf_identify_passthru 00:37:32.821 ************************************ 00:37:32.821 14:09:14 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:32.821 14:09:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:32.821 14:09:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:32.821 14:09:14 -- common/autotest_common.sh@10 -- # set +x 00:37:32.821 ************************************ 00:37:32.821 START TEST nvmf_dif 00:37:32.821 ************************************ 00:37:32.821 14:09:14 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:32.821 * Looking for test storage... 
00:37:32.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:32.821 14:09:14 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:32.821 14:09:14 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:37:32.821 14:09:14 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:32.821 14:09:15 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:37:32.821 14:09:15 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:32.821 14:09:15 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:32.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:32.821 --rc genhtml_branch_coverage=1 00:37:32.821 --rc genhtml_function_coverage=1 00:37:32.821 --rc genhtml_legend=1 00:37:32.821 --rc geninfo_all_blocks=1 00:37:32.821 --rc geninfo_unexecuted_blocks=1 00:37:32.821 00:37:32.821 ' 00:37:32.821 14:09:15 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:32.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:32.821 --rc genhtml_branch_coverage=1 00:37:32.821 --rc genhtml_function_coverage=1 00:37:32.821 --rc genhtml_legend=1 00:37:32.821 --rc geninfo_all_blocks=1 00:37:32.821 --rc geninfo_unexecuted_blocks=1 00:37:32.821 00:37:32.821 ' 00:37:32.821 14:09:15 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:37:32.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:32.821 --rc genhtml_branch_coverage=1 00:37:32.821 --rc genhtml_function_coverage=1 00:37:32.821 --rc genhtml_legend=1 00:37:32.821 --rc geninfo_all_blocks=1 00:37:32.821 --rc geninfo_unexecuted_blocks=1 00:37:32.821 00:37:32.821 ' 00:37:32.821 14:09:15 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:32.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:32.821 --rc genhtml_branch_coverage=1 00:37:32.821 --rc genhtml_function_coverage=1 00:37:32.821 --rc genhtml_legend=1 00:37:32.821 --rc geninfo_all_blocks=1 00:37:32.821 --rc geninfo_unexecuted_blocks=1 00:37:32.821 00:37:32.821 ' 00:37:32.821 14:09:15 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:37:32.821 14:09:15 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:32.821 14:09:15 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:32.821 14:09:15 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:32.821 14:09:15 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:32.821 14:09:15 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:32.821 14:09:15 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:37:32.821 14:09:15 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:32.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:32.821 14:09:15 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:37:32.821 14:09:15 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:37:32.821 14:09:15 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:37:32.821 14:09:15 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:37:32.821 14:09:15 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:32.821 14:09:15 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:32.822 14:09:15 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:32.822 14:09:15 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:32.822 14:09:15 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:32.822 14:09:15 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:32.822 14:09:15 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:32.822 14:09:15 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:37:32.822 14:09:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:38.095 14:09:20 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:38.095 14:09:20 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:37:38.095 14:09:20 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:38.095 14:09:20 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:38.095 14:09:20 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:38.095 14:09:20 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:38.095 14:09:20 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:38.095 14:09:20 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:37:38.095 14:09:20 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:38.095 14:09:20 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:37:38.095 14:09:20 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:37:38.354 Found 0000:86:00.0 (0x8086 - 0x159b) 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:37:38.354 Found 0000:86:00.1 (0x8086 - 0x159b) 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:38.354 14:09:20 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:37:38.354 Found net devices under 0000:86:00.0: cvl_0_0 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:37:38.354 Found net devices under 0000:86:00.1: cvl_0_1 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:38.354 14:09:20 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:38.355 14:09:20 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:38.355 
14:09:20 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:38.355 14:09:20 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:38.355 14:09:20 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:38.355 14:09:20 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:38.355 14:09:20 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:38.355 14:09:20 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:38.355 14:09:20 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:38.355 14:09:20 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:38.355 14:09:20 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:38.355 14:09:20 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:38.355 14:09:20 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:38.355 14:09:20 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:38.355 14:09:20 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:38.355 14:09:20 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:38.355 14:09:20 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:38.355 14:09:20 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:38.355 14:09:20 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:38.355 14:09:20 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:38.355 14:09:20 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:38.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:38.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:37:38.355 00:37:38.355 --- 10.0.0.2 ping statistics --- 00:37:38.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:38.355 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:37:38.355 14:09:20 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:38.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:38.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:37:38.355 00:37:38.355 --- 10.0.0.1 ping statistics --- 00:37:38.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:38.355 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:37:38.355 14:09:20 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:38.355 14:09:20 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:37:38.355 14:09:20 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:38.355 14:09:20 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:41.643 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:37:41.643 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:37:41.643 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:37:41.643 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:37:41.643 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:37:41.643 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:37:41.643 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:37:41.643 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:37:41.643 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:37:41.643 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:37:41.643 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:37:41.643 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:37:41.643 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:37:41.643 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:37:41.643 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:37:41.643 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:37:41.643 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:37:41.643 14:09:23 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:41.643 14:09:23 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:41.643 14:09:23 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:41.643 14:09:23 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:41.643 14:09:23 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:41.643 14:09:23 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:41.643 14:09:23 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:37:41.643 14:09:23 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:37:41.643 14:09:23 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:41.643 14:09:23 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:41.643 14:09:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:41.643 14:09:23 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=922614 00:37:41.643 14:09:23 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 922614 00:37:41.643 14:09:23 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:37:41.643 14:09:23 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 922614 ']' 00:37:41.643 14:09:23 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:41.643 14:09:23 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:41.643 14:09:23 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:41.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:41.643 14:09:23 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:41.643 14:09:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:41.643 [2024-12-05 14:09:23.900099] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:37:41.643 [2024-12-05 14:09:23.900147] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:41.643 [2024-12-05 14:09:23.977856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:41.643 [2024-12-05 14:09:24.017272] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:41.643 [2024-12-05 14:09:24.017305] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:41.643 [2024-12-05 14:09:24.017313] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:41.643 [2024-12-05 14:09:24.017321] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:41.643 [2024-12-05 14:09:24.017329] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:41.643 [2024-12-05 14:09:24.017893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:41.643 14:09:24 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:41.643 14:09:24 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:37:41.643 14:09:24 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:41.643 14:09:24 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:41.643 14:09:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:41.643 14:09:24 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:41.643 14:09:24 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:37:41.643 14:09:24 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:37:41.643 14:09:24 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.643 14:09:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:41.643 [2024-12-05 14:09:24.167087] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:41.643 14:09:24 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.643 14:09:24 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:37:41.644 14:09:24 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:41.644 14:09:24 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:41.644 14:09:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:41.644 ************************************ 00:37:41.644 START TEST fio_dif_1_default 00:37:41.644 ************************************ 00:37:41.644 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:37:41.644 14:09:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:37:41.644 14:09:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:37:41.644 14:09:24 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:37:41.644 14:09:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:37:41.644 14:09:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:37:41.644 14:09:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:41.644 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.644 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:41.644 bdev_null0 00:37:41.644 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.644 14:09:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:41.644 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.644 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:41.644 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.644 14:09:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:41.644 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.644 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:41.901 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.901 14:09:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:41.901 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.901 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:41.901 [2024-12-05 14:09:24.243441] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:41.901 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.901 14:09:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:37:41.901 14:09:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:37:41.901 14:09:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:41.901 14:09:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:37:41.901 14:09:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:41.901 14:09:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:37:41.901 14:09:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:41.901 14:09:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:37:41.901 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:41.901 14:09:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:41.901 { 00:37:41.901 "params": { 00:37:41.901 "name": "Nvme$subsystem", 00:37:41.901 "trtype": "$TEST_TRANSPORT", 00:37:41.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:41.901 "adrfam": "ipv4", 00:37:41.901 "trsvcid": "$NVMF_PORT", 00:37:41.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:41.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:41.901 "hdgst": ${hdgst:-false}, 00:37:41.901 "ddgst": ${ddgst:-false} 00:37:41.901 }, 00:37:41.901 "method": "bdev_nvme_attach_controller" 00:37:41.901 } 00:37:41.901 EOF 00:37:41.901 )") 00:37:41.902 14:09:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:37:41.902 14:09:24 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:41.902 14:09:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:37:41.902 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:41.902 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:41.902 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:41.902 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:37:41.902 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:41.902 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:41.902 14:09:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:37:41.902 14:09:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:37:41.902 14:09:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:37:41.902 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:41.902 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:37:41.902 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:41.902 14:09:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:37:41.902 14:09:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:37:41.902 14:09:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:41.902 "params": { 00:37:41.902 "name": "Nvme0", 00:37:41.902 "trtype": "tcp", 00:37:41.902 "traddr": "10.0.0.2", 00:37:41.902 "adrfam": "ipv4", 00:37:41.902 "trsvcid": "4420", 00:37:41.902 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:41.902 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:41.902 "hdgst": false, 00:37:41.902 "ddgst": false 00:37:41.902 }, 00:37:41.902 "method": "bdev_nvme_attach_controller" 00:37:41.902 }' 00:37:41.902 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:41.902 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:41.902 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:41.902 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:41.902 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:41.902 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:41.902 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:41.902 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:41.902 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:41.902 14:09:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:42.158 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:42.158 fio-3.35 
00:37:42.158 Starting 1 thread 00:37:54.348 00:37:54.348 filename0: (groupid=0, jobs=1): err= 0: pid=922948: Thu Dec 5 14:09:35 2024 00:37:54.348 read: IOPS=96, BW=386KiB/s (396kB/s)(3872KiB/10021msec) 00:37:54.348 slat (nsec): min=5897, max=35552, avg=6477.06, stdev=1398.36 00:37:54.348 clat (usec): min=397, max=44806, avg=41389.98, stdev=2693.51 00:37:54.348 lat (usec): min=403, max=44842, avg=41396.46, stdev=2693.57 00:37:54.348 clat percentiles (usec): 00:37:54.348 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:37:54.348 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:37:54.348 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:37:54.348 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:37:54.348 | 99.99th=[44827] 00:37:54.348 bw ( KiB/s): min= 352, max= 416, per=99.64%, avg=385.60, stdev=12.61, samples=20 00:37:54.349 iops : min= 88, max= 104, avg=96.40, stdev= 3.15, samples=20 00:37:54.349 lat (usec) : 500=0.41% 00:37:54.349 lat (msec) : 50=99.59% 00:37:54.349 cpu : usr=92.61%, sys=7.11%, ctx=8, majf=0, minf=0 00:37:54.349 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:54.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:54.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:54.349 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:54.349 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:54.349 00:37:54.349 Run status group 0 (all jobs): 00:37:54.349 READ: bw=386KiB/s (396kB/s), 386KiB/s-386KiB/s (396kB/s-396kB/s), io=3872KiB (3965kB), run=10021-10021msec 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:37:54.349 14:09:35 
nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.349 00:37:54.349 real 0m11.115s 00:37:54.349 user 0m16.286s 00:37:54.349 sys 0m1.024s 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:54.349 ************************************ 00:37:54.349 END TEST fio_dif_1_default 00:37:54.349 ************************************ 00:37:54.349 14:09:35 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:37:54.349 14:09:35 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:54.349 14:09:35 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:54.349 14:09:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:54.349 ************************************ 00:37:54.349 START TEST fio_dif_1_multi_subsystems 00:37:54.349 ************************************ 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:54.349 bdev_null0 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:54.349 [2024-12-05 14:09:35.430889] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:54.349 bdev_null1 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:54.349 14:09:35 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:37:54.349 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:54.350 { 00:37:54.350 "params": { 00:37:54.350 "name": "Nvme$subsystem", 00:37:54.350 "trtype": "$TEST_TRANSPORT", 00:37:54.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:54.350 "adrfam": "ipv4", 00:37:54.350 "trsvcid": "$NVMF_PORT", 00:37:54.350 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:54.350 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:54.350 "hdgst": ${hdgst:-false}, 00:37:54.350 "ddgst": ${ddgst:-false} 00:37:54.350 }, 00:37:54.350 "method": "bdev_nvme_attach_controller" 00:37:54.350 } 00:37:54.350 EOF 00:37:54.350 )") 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- 
nvmf/common.sh@582 -- # cat 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:54.350 { 00:37:54.350 "params": { 00:37:54.350 "name": "Nvme$subsystem", 00:37:54.350 "trtype": "$TEST_TRANSPORT", 00:37:54.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:54.350 "adrfam": "ipv4", 00:37:54.350 "trsvcid": "$NVMF_PORT", 00:37:54.350 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:54.350 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:54.350 "hdgst": ${hdgst:-false}, 00:37:54.350 "ddgst": ${ddgst:-false} 00:37:54.350 }, 00:37:54.350 "method": "bdev_nvme_attach_controller" 00:37:54.350 } 00:37:54.350 EOF 00:37:54.350 )") 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:54.350 "params": { 00:37:54.350 "name": "Nvme0", 00:37:54.350 "trtype": "tcp", 00:37:54.350 "traddr": "10.0.0.2", 00:37:54.350 "adrfam": "ipv4", 00:37:54.350 "trsvcid": "4420", 00:37:54.350 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:54.350 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:54.350 "hdgst": false, 00:37:54.350 "ddgst": false 00:37:54.350 }, 00:37:54.350 "method": "bdev_nvme_attach_controller" 00:37:54.350 },{ 00:37:54.350 "params": { 00:37:54.350 "name": "Nvme1", 00:37:54.350 "trtype": "tcp", 00:37:54.350 "traddr": "10.0.0.2", 00:37:54.350 "adrfam": "ipv4", 00:37:54.350 "trsvcid": "4420", 00:37:54.350 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:54.350 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:54.350 "hdgst": false, 00:37:54.350 "ddgst": false 00:37:54.350 }, 00:37:54.350 "method": "bdev_nvme_attach_controller" 00:37:54.350 }' 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:54.350 14:09:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:54.350 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:54.350 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:54.350 fio-3.35 00:37:54.350 Starting 2 threads 00:38:04.316 00:38:04.316 filename0: (groupid=0, jobs=1): err= 0: pid=924811: Thu Dec 5 14:09:46 2024 00:38:04.316 read: IOPS=195, BW=780KiB/s (799kB/s)(7824KiB/10030msec) 00:38:04.316 slat (nsec): min=5928, max=29786, avg=7144.88, stdev=2205.09 00:38:04.316 clat (usec): min=383, max=42568, avg=20489.02, stdev=20456.05 00:38:04.316 lat (usec): min=389, max=42574, avg=20496.16, stdev=20455.40 00:38:04.316 clat percentiles (usec): 00:38:04.316 | 1.00th=[ 396], 5.00th=[ 408], 10.00th=[ 416], 20.00th=[ 449], 00:38:04.316 | 30.00th=[ 461], 40.00th=[ 578], 50.00th=[ 971], 60.00th=[41157], 00:38:04.316 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:38:04.316 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:38:04.316 | 99.99th=[42730] 00:38:04.316 bw ( KiB/s): min= 704, max= 896, per=50.31%, avg=780.80, stdev=39.40, samples=20 00:38:04.316 iops : min= 176, max= 224, avg=195.20, stdev= 9.85, samples=20 00:38:04.316 lat (usec) : 500=36.35%, 750=13.14%, 1000=0.97% 00:38:04.316 lat (msec) : 2=0.66%, 50=48.88% 00:38:04.316 cpu : usr=96.80%, sys=2.95%, ctx=13, majf=0, minf=111 00:38:04.316 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:04.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:38:04.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:04.316 issued rwts: total=1956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:04.316 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:04.316 filename1: (groupid=0, jobs=1): err= 0: pid=924812: Thu Dec 5 14:09:46 2024 00:38:04.316 read: IOPS=192, BW=770KiB/s (789kB/s)(7728KiB/10031msec) 00:38:04.316 slat (nsec): min=5929, max=28157, avg=7090.71, stdev=1954.43 00:38:04.316 clat (usec): min=383, max=42563, avg=20746.06, stdev=20425.39 00:38:04.316 lat (usec): min=389, max=42570, avg=20753.15, stdev=20424.79 00:38:04.316 clat percentiles (usec): 00:38:04.316 | 1.00th=[ 396], 5.00th=[ 404], 10.00th=[ 408], 20.00th=[ 416], 00:38:04.316 | 30.00th=[ 424], 40.00th=[ 529], 50.00th=[ 652], 60.00th=[40633], 00:38:04.316 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:38:04.316 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:38:04.316 | 99.99th=[42730] 00:38:04.316 bw ( KiB/s): min= 704, max= 832, per=49.73%, avg=771.20, stdev=25.22, samples=20 00:38:04.316 iops : min= 176, max= 208, avg=192.80, stdev= 6.30, samples=20 00:38:04.316 lat (usec) : 500=39.39%, 750=10.92% 00:38:04.316 lat (msec) : 50=49.69% 00:38:04.316 cpu : usr=96.42%, sys=3.33%, ctx=16, majf=0, minf=152 00:38:04.316 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:04.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:04.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:04.316 issued rwts: total=1932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:04.316 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:04.316 00:38:04.316 Run status group 0 (all jobs): 00:38:04.316 READ: bw=1550KiB/s (1588kB/s), 770KiB/s-780KiB/s (789kB/s-799kB/s), io=15.2MiB (15.9MB), run=10030-10031msec 00:38:04.316 14:09:46 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@96 -- # destroy_subsystems 0 1 00:38:04.316 14:09:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:38:04.316 14:09:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:04.316 14:09:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:04.316 14:09:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:38:04.316 14:09:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:04.316 14:09:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.316 14:09:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:04.316 14:09:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.316 14:09:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:04.316 14:09:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.316 14:09:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:04.316 14:09:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.316 14:09:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:04.317 14:09:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:04.317 14:09:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:38:04.317 14:09:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:04.317 14:09:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.317 14:09:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
00:38:04.317 14:09:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.317 14:09:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:04.317 14:09:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.317 14:09:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:04.317 14:09:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.317 00:38:04.317 real 0m11.325s 00:38:04.317 user 0m26.692s 00:38:04.317 sys 0m0.954s 00:38:04.317 14:09:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:04.317 14:09:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:04.317 ************************************ 00:38:04.317 END TEST fio_dif_1_multi_subsystems 00:38:04.317 ************************************ 00:38:04.317 14:09:46 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:38:04.317 14:09:46 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:04.317 14:09:46 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:04.317 14:09:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:04.317 ************************************ 00:38:04.317 START TEST fio_dif_rand_params 00:38:04.317 ************************************ 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 
00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:04.317 bdev_null0 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:04.317 [2024-12-05 14:09:46.827657] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:04.317 { 00:38:04.317 "params": { 00:38:04.317 "name": "Nvme$subsystem", 00:38:04.317 "trtype": 
"$TEST_TRANSPORT", 00:38:04.317 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:04.317 "adrfam": "ipv4", 00:38:04.317 "trsvcid": "$NVMF_PORT", 00:38:04.317 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:04.317 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:04.317 "hdgst": ${hdgst:-false}, 00:38:04.317 "ddgst": ${ddgst:-false} 00:38:04.317 }, 00:38:04.317 "method": "bdev_nvme_attach_controller" 00:38:04.317 } 00:38:04.317 EOF 00:38:04.317 )") 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- 
# grep libasan 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:04.317 "params": { 00:38:04.317 "name": "Nvme0", 00:38:04.317 "trtype": "tcp", 00:38:04.317 "traddr": "10.0.0.2", 00:38:04.317 "adrfam": "ipv4", 00:38:04.317 "trsvcid": "4420", 00:38:04.317 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:04.317 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:04.317 "hdgst": false, 00:38:04.317 "ddgst": false 00:38:04.317 }, 00:38:04.317 "method": "bdev_nvme_attach_controller" 00:38:04.317 }' 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:04.317 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:04.588 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:04.588 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:04.588 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:04.588 14:09:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:04.846 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:04.846 ... 00:38:04.846 fio-3.35 00:38:04.846 Starting 3 threads 00:38:11.400 00:38:11.400 filename0: (groupid=0, jobs=1): err= 0: pid=926702: Thu Dec 5 14:09:52 2024 00:38:11.400 read: IOPS=321, BW=40.2MiB/s (42.2MB/s)(201MiB/5008msec) 00:38:11.400 slat (nsec): min=6142, max=26881, avg=10736.68, stdev=2053.59 00:38:11.400 clat (usec): min=3567, max=51090, avg=9312.52, stdev=4195.42 00:38:11.400 lat (usec): min=3576, max=51101, avg=9323.26, stdev=4195.53 00:38:11.400 clat percentiles (usec): 00:38:11.400 | 1.00th=[ 3982], 5.00th=[ 6063], 10.00th=[ 6587], 20.00th=[ 7898], 00:38:11.400 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9503], 00:38:11.400 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[10945], 95.00th=[11338], 00:38:11.400 | 99.00th=[12780], 99.50th=[49021], 99.90th=[50070], 99.95th=[51119], 00:38:11.400 | 99.99th=[51119] 00:38:11.400 bw ( KiB/s): min=34304, max=46848, per=34.35%, avg=41164.80, stdev=3980.23, samples=10 00:38:11.400 iops : min= 268, max= 366, avg=321.60, stdev=31.10, samples=10 00:38:11.400 lat (msec) : 4=1.06%, 10=74.92%, 20=23.09%, 50=0.74%, 100=0.19% 00:38:11.400 cpu : usr=94.49%, sys=5.21%, ctx=11, majf=0, minf=0 00:38:11.400 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:11.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:11.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:11.400 issued rwts: total=1611,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:11.400 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:11.400 filename0: (groupid=0, jobs=1): err= 0: pid=926703: Thu Dec 5 14:09:52 2024 00:38:11.400 read: IOPS=319, BW=39.9MiB/s (41.9MB/s)(200MiB/5004msec) 00:38:11.400 slat (nsec): min=6146, 
max=22356, avg=10954.92, stdev=1922.08 00:38:11.400 clat (usec): min=3295, max=50570, avg=9373.96, stdev=5020.53 00:38:11.401 lat (usec): min=3302, max=50583, avg=9384.92, stdev=5020.50 00:38:11.401 clat percentiles (usec): 00:38:11.401 | 1.00th=[ 4047], 5.00th=[ 6128], 10.00th=[ 6521], 20.00th=[ 8160], 00:38:11.401 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9372], 00:38:11.401 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10159], 95.00th=[10421], 00:38:11.401 | 99.00th=[47973], 99.50th=[49546], 99.90th=[50070], 99.95th=[50594], 00:38:11.401 | 99.99th=[50594] 00:38:11.401 bw ( KiB/s): min=30208, max=44800, per=34.12%, avg=40883.20, stdev=4648.98, samples=10 00:38:11.401 iops : min= 236, max= 350, avg=319.40, stdev=36.32, samples=10 00:38:11.401 lat (msec) : 4=0.88%, 10=86.43%, 20=11.19%, 50=1.19%, 100=0.31% 00:38:11.401 cpu : usr=93.94%, sys=5.68%, ctx=18, majf=0, minf=9 00:38:11.401 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:11.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:11.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:11.401 issued rwts: total=1599,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:11.401 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:11.401 filename0: (groupid=0, jobs=1): err= 0: pid=926704: Thu Dec 5 14:09:52 2024 00:38:11.401 read: IOPS=295, BW=36.9MiB/s (38.7MB/s)(185MiB/5004msec) 00:38:11.401 slat (nsec): min=6229, max=27109, avg=10964.76, stdev=1988.40 00:38:11.401 clat (usec): min=3945, max=52645, avg=10143.76, stdev=5838.69 00:38:11.401 lat (usec): min=3953, max=52670, avg=10154.72, stdev=5838.73 00:38:11.401 clat percentiles (usec): 00:38:11.401 | 1.00th=[ 5866], 5.00th=[ 6521], 10.00th=[ 7111], 20.00th=[ 8848], 00:38:11.401 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9765], 00:38:11.401 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10683], 95.00th=[11207], 00:38:11.401 | 
99.00th=[50070], 99.50th=[51119], 99.90th=[52691], 99.95th=[52691], 00:38:11.401 | 99.99th=[52691] 00:38:11.401 bw ( KiB/s): min=25344, max=45312, per=31.51%, avg=37760.00, stdev=5281.37, samples=10 00:38:11.401 iops : min= 198, max= 354, avg=295.00, stdev=41.26, samples=10 00:38:11.401 lat (msec) : 4=0.07%, 10=70.50%, 20=27.40%, 50=1.15%, 100=0.88% 00:38:11.401 cpu : usr=94.08%, sys=5.58%, ctx=11, majf=0, minf=9 00:38:11.401 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:11.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:11.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:11.401 issued rwts: total=1478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:11.401 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:11.401 00:38:11.401 Run status group 0 (all jobs): 00:38:11.401 READ: bw=117MiB/s (123MB/s), 36.9MiB/s-40.2MiB/s (38.7MB/s-42.2MB/s), io=586MiB (614MB), run=5004-5008msec 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete 
bdev_null0 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:11.401 bdev_null0 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:11.401 [2024-12-05 14:09:52.925571] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:11.401 bdev_null1 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.401 14:09:52 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:11.401 bdev_null2 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.401 14:09:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:11.401 14:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.401 14:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:38:11.401 14:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:38:11.401 14:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:38:11.401 14:09:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:38:11.401 14:09:53 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:11.401 14:09:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:11.402 { 00:38:11.402 "params": { 00:38:11.402 "name": "Nvme$subsystem", 00:38:11.402 "trtype": "$TEST_TRANSPORT", 00:38:11.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:11.402 "adrfam": "ipv4", 00:38:11.402 "trsvcid": "$NVMF_PORT", 00:38:11.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:11.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:11.402 "hdgst": ${hdgst:-false}, 00:38:11.402 "ddgst": ${ddgst:-false} 00:38:11.402 }, 00:38:11.402 "method": "bdev_nvme_attach_controller" 00:38:11.402 } 00:38:11.402 EOF 00:38:11.402 )") 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:11.402 { 00:38:11.402 "params": { 00:38:11.402 "name": "Nvme$subsystem", 00:38:11.402 "trtype": "$TEST_TRANSPORT", 00:38:11.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:11.402 "adrfam": "ipv4", 00:38:11.402 "trsvcid": "$NVMF_PORT", 00:38:11.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:11.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:11.402 "hdgst": ${hdgst:-false}, 00:38:11.402 "ddgst": ${ddgst:-false} 00:38:11.402 }, 00:38:11.402 "method": "bdev_nvme_attach_controller" 00:38:11.402 } 00:38:11.402 EOF 00:38:11.402 )") 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:11.402 
14:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:11.402 { 00:38:11.402 "params": { 00:38:11.402 "name": "Nvme$subsystem", 00:38:11.402 "trtype": "$TEST_TRANSPORT", 00:38:11.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:11.402 "adrfam": "ipv4", 00:38:11.402 "trsvcid": "$NVMF_PORT", 00:38:11.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:11.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:11.402 "hdgst": ${hdgst:-false}, 00:38:11.402 "ddgst": ${ddgst:-false} 00:38:11.402 }, 00:38:11.402 "method": "bdev_nvme_attach_controller" 00:38:11.402 } 00:38:11.402 EOF 00:38:11.402 )") 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:11.402 "params": { 00:38:11.402 "name": "Nvme0", 00:38:11.402 "trtype": "tcp", 00:38:11.402 "traddr": "10.0.0.2", 00:38:11.402 "adrfam": "ipv4", 00:38:11.402 "trsvcid": "4420", 00:38:11.402 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:11.402 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:11.402 "hdgst": false, 00:38:11.402 "ddgst": false 00:38:11.402 }, 00:38:11.402 "method": "bdev_nvme_attach_controller" 00:38:11.402 },{ 00:38:11.402 "params": { 00:38:11.402 "name": "Nvme1", 00:38:11.402 "trtype": "tcp", 00:38:11.402 "traddr": "10.0.0.2", 00:38:11.402 "adrfam": "ipv4", 00:38:11.402 "trsvcid": "4420", 00:38:11.402 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:11.402 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:11.402 "hdgst": false, 00:38:11.402 "ddgst": false 00:38:11.402 }, 00:38:11.402 "method": "bdev_nvme_attach_controller" 00:38:11.402 },{ 00:38:11.402 "params": { 00:38:11.402 "name": "Nvme2", 00:38:11.402 "trtype": "tcp", 00:38:11.402 "traddr": "10.0.0.2", 00:38:11.402 "adrfam": "ipv4", 00:38:11.402 "trsvcid": "4420", 00:38:11.402 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:38:11.402 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:38:11.402 "hdgst": false, 00:38:11.402 "ddgst": false 00:38:11.402 }, 00:38:11.402 "method": "bdev_nvme_attach_controller" 00:38:11.402 }' 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:11.402 14:09:53 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:11.402 14:09:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:11.402 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:11.402 ... 00:38:11.402 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:11.402 ... 00:38:11.402 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:11.402 ... 
00:38:11.402 fio-3.35 00:38:11.402 Starting 24 threads 00:38:23.738 00:38:23.738 filename0: (groupid=0, jobs=1): err= 0: pid=927962: Thu Dec 5 14:10:04 2024 00:38:23.739 read: IOPS=529, BW=2117KiB/s (2168kB/s)(20.7MiB/10008msec) 00:38:23.739 slat (usec): min=11, max=107, avg=50.45, stdev=21.31 00:38:23.739 clat (usec): min=8959, max=31515, avg=29752.20, stdev=2026.36 00:38:23.739 lat (usec): min=9007, max=31563, avg=29802.64, stdev=2029.35 00:38:23.739 clat percentiles (usec): 00:38:23.739 | 1.00th=[16581], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:38:23.739 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:38:23.739 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:38:23.739 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:38:23.739 | 99.99th=[31589] 00:38:23.739 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2112.00, stdev=77.69, samples=20 00:38:23.739 iops : min= 512, max= 576, avg=528.00, stdev=19.42, samples=20 00:38:23.739 lat (msec) : 10=0.30%, 20=0.91%, 50=98.79% 00:38:23.739 cpu : usr=98.70%, sys=0.90%, ctx=14, majf=0, minf=9 00:38:23.739 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:23.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.739 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.739 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:23.739 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:23.739 filename0: (groupid=0, jobs=1): err= 0: pid=927963: Thu Dec 5 14:10:04 2024 00:38:23.739 read: IOPS=524, BW=2098KiB/s (2149kB/s)(20.5MiB/10004msec) 00:38:23.739 slat (usec): min=7, max=100, avg=37.10, stdev=17.46 00:38:23.739 clat (usec): min=16468, max=64976, avg=30133.09, stdev=2054.92 00:38:23.739 lat (usec): min=16490, max=64990, avg=30170.19, stdev=2054.66 00:38:23.739 clat percentiles (usec): 00:38:23.739 | 1.00th=[29230], 
5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:38:23.739 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:38:23.739 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:38:23.739 | 99.00th=[31327], 99.50th=[31327], 99.90th=[62653], 99.95th=[62653], 00:38:23.739 | 99.99th=[64750] 00:38:23.739 bw ( KiB/s): min= 1923, max= 2176, per=4.13%, avg=2088.58, stdev=74.17, samples=19 00:38:23.739 iops : min= 480, max= 544, avg=522.11, stdev=18.64, samples=19 00:38:23.739 lat (msec) : 20=0.30%, 50=99.39%, 100=0.30% 00:38:23.739 cpu : usr=98.85%, sys=0.74%, ctx=12, majf=0, minf=9 00:38:23.739 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:23.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.739 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.739 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:23.739 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:23.739 filename0: (groupid=0, jobs=1): err= 0: pid=927964: Thu Dec 5 14:10:04 2024 00:38:23.739 read: IOPS=525, BW=2101KiB/s (2151kB/s)(20.6MiB/10018msec) 00:38:23.739 slat (usec): min=7, max=105, avg=38.25, stdev=17.28 00:38:23.739 clat (usec): min=17324, max=45662, avg=30091.00, stdev=1225.98 00:38:23.739 lat (usec): min=17349, max=45676, avg=30129.25, stdev=1226.49 00:38:23.739 clat percentiles (usec): 00:38:23.739 | 1.00th=[29230], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754], 00:38:23.739 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:38:23.739 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:38:23.739 | 99.00th=[31327], 99.50th=[31589], 99.90th=[45876], 99.95th=[45876], 00:38:23.739 | 99.99th=[45876] 00:38:23.739 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2097.90, stdev=75.41, samples=20 00:38:23.739 iops : min= 480, max= 544, avg=524.45, stdev=18.84, samples=20 00:38:23.739 lat (msec) 
: 20=0.27%, 50=99.73% 00:38:23.739 cpu : usr=98.67%, sys=0.92%, ctx=13, majf=0, minf=9 00:38:23.739 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:23.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.739 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.739 issued rwts: total=5262,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:23.739 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:23.739 filename0: (groupid=0, jobs=1): err= 0: pid=927965: Thu Dec 5 14:10:04 2024 00:38:23.739 read: IOPS=554, BW=2219KiB/s (2273kB/s)(21.7MiB/10006msec) 00:38:23.739 slat (nsec): min=6039, max=80254, avg=20984.71, stdev=15471.52 00:38:23.739 clat (usec): min=1102, max=31585, avg=28669.88, stdev=6457.93 00:38:23.739 lat (usec): min=1111, max=31612, avg=28690.87, stdev=6459.97 00:38:23.739 clat percentiles (usec): 00:38:23.739 | 1.00th=[ 1205], 5.00th=[ 7963], 10.00th=[29754], 20.00th=[30016], 00:38:23.739 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:38:23.739 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30540], 00:38:23.739 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:38:23.739 | 99.99th=[31589] 00:38:23.739 bw ( KiB/s): min= 2048, max= 4480, per=4.38%, avg=2214.80, stdev=536.75, samples=20 00:38:23.739 iops : min= 512, max= 1120, avg=553.70, stdev=134.19, samples=20 00:38:23.739 lat (msec) : 2=4.25%, 4=0.36%, 10=0.61%, 20=0.83%, 50=93.95% 00:38:23.739 cpu : usr=98.28%, sys=1.14%, ctx=38, majf=0, minf=9 00:38:23.739 IO depths : 1=5.9%, 2=11.9%, 4=24.1%, 8=51.3%, 16=6.8%, 32=0.0%, >=64=0.0% 00:38:23.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.739 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.739 issued rwts: total=5552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:23.739 latency : target=0, window=0, percentile=100.00%, depth=16 
00:38:23.739 filename0: (groupid=0, jobs=1): err= 0: pid=927966: Thu Dec 5 14:10:04 2024 00:38:23.739 read: IOPS=529, BW=2117KiB/s (2168kB/s)(20.7MiB/10007msec) 00:38:23.739 slat (nsec): min=7128, max=79240, avg=35491.68, stdev=16431.43 00:38:23.739 clat (usec): min=8816, max=42000, avg=29959.74, stdev=2245.72 00:38:23.739 lat (usec): min=8847, max=42057, avg=29995.23, stdev=2246.17 00:38:23.739 clat percentiles (usec): 00:38:23.739 | 1.00th=[16712], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:38:23.739 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:38:23.739 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:38:23.739 | 99.00th=[31065], 99.50th=[31327], 99.90th=[41681], 99.95th=[42206], 00:38:23.739 | 99.99th=[42206] 00:38:23.739 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2112.00, stdev=77.69, samples=20 00:38:23.739 iops : min= 512, max= 576, avg=528.00, stdev=19.42, samples=20 00:38:23.739 lat (msec) : 10=0.30%, 20=1.28%, 50=98.41% 00:38:23.739 cpu : usr=98.63%, sys=0.96%, ctx=43, majf=0, minf=9 00:38:23.739 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:38:23.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.739 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.739 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:23.739 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:23.739 filename0: (groupid=0, jobs=1): err= 0: pid=927967: Thu Dec 5 14:10:04 2024 00:38:23.739 read: IOPS=527, BW=2111KiB/s (2162kB/s)(20.6MiB/10004msec) 00:38:23.739 slat (nsec): min=7815, max=73094, avg=25114.62, stdev=12556.21 00:38:23.739 clat (usec): min=11449, max=34094, avg=30118.27, stdev=1493.38 00:38:23.739 lat (usec): min=11463, max=34106, avg=30143.38, stdev=1492.85 00:38:23.739 clat percentiles (usec): 00:38:23.739 | 1.00th=[22414], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 
00:38:23.739 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:38:23.739 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:38:23.739 | 99.00th=[31327], 99.50th=[31589], 99.90th=[32900], 99.95th=[33162], 00:38:23.739 | 99.99th=[34341] 00:38:23.739 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2108.63, stdev=78.31, samples=19 00:38:23.739 iops : min= 512, max= 576, avg=527.16, stdev=19.58, samples=19 00:38:23.739 lat (msec) : 20=0.85%, 50=99.15% 00:38:23.739 cpu : usr=98.61%, sys=0.91%, ctx=59, majf=0, minf=9 00:38:23.739 IO depths : 1=6.0%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:38:23.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.739 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.739 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:23.739 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:23.739 filename0: (groupid=0, jobs=1): err= 0: pid=927968: Thu Dec 5 14:10:04 2024 00:38:23.739 read: IOPS=527, BW=2111KiB/s (2162kB/s)(20.6MiB/10004msec) 00:38:23.739 slat (nsec): min=7257, max=72077, avg=18918.33, stdev=11554.29 00:38:23.739 clat (usec): min=10945, max=33364, avg=30161.83, stdev=1514.99 00:38:23.739 lat (usec): min=10968, max=33378, avg=30180.75, stdev=1513.42 00:38:23.739 clat percentiles (usec): 00:38:23.739 | 1.00th=[20579], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:38:23.739 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:38:23.739 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:38:23.739 | 99.00th=[31327], 99.50th=[31589], 99.90th=[32113], 99.95th=[32900], 00:38:23.739 | 99.99th=[33424] 00:38:23.739 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2108.63, stdev=78.31, samples=19 00:38:23.739 iops : min= 512, max= 576, avg=527.16, stdev=19.58, samples=19 00:38:23.739 lat (msec) : 20=0.61%, 50=99.39% 00:38:23.739 cpu : 
usr=98.72%, sys=0.84%, ctx=79, majf=0, minf=9 00:38:23.739 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:23.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.739 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.739 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:23.739 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:23.739 filename0: (groupid=0, jobs=1): err= 0: pid=927969: Thu Dec 5 14:10:04 2024 00:38:23.739 read: IOPS=525, BW=2104KiB/s (2154kB/s)(20.6MiB/10008msec) 00:38:23.739 slat (usec): min=6, max=104, avg=48.56, stdev=22.45 00:38:23.739 clat (usec): min=19937, max=31480, avg=29937.50, stdev=767.77 00:38:23.739 lat (usec): min=19958, max=31536, avg=29986.06, stdev=773.02 00:38:23.739 clat percentiles (usec): 00:38:23.739 | 1.00th=[29492], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:38:23.739 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:38:23.739 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:38:23.740 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[31589], 00:38:23.740 | 99.99th=[31589] 00:38:23.740 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2101.89, stdev=64.93, samples=19 00:38:23.740 iops : min= 512, max= 544, avg=525.47, stdev=16.23, samples=19 00:38:23.740 lat (msec) : 20=0.08%, 50=99.92% 00:38:23.740 cpu : usr=98.85%, sys=0.74%, ctx=14, majf=0, minf=9 00:38:23.740 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:23.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.740 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.740 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:23.740 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:23.740 filename1: (groupid=0, jobs=1): err= 0: pid=927970: Thu Dec 5 14:10:04 2024 
00:38:23.740 read: IOPS=527, BW=2111KiB/s (2161kB/s)(20.6MiB/10007msec) 00:38:23.740 slat (usec): min=8, max=105, avg=50.28, stdev=21.41 00:38:23.740 clat (usec): min=10913, max=31619, avg=29860.52, stdev=1457.55 00:38:23.740 lat (usec): min=10937, max=31645, avg=29910.80, stdev=1459.85 00:38:23.740 clat percentiles (usec): 00:38:23.740 | 1.00th=[21627], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:38:23.740 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:38:23.740 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:38:23.740 | 99.00th=[30802], 99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:38:23.740 | 99.99th=[31589] 00:38:23.740 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2105.60, stdev=65.33, samples=20 00:38:23.740 iops : min= 512, max= 544, avg=526.40, stdev=16.33, samples=20 00:38:23.740 lat (msec) : 20=0.61%, 50=99.39% 00:38:23.740 cpu : usr=98.87%, sys=0.73%, ctx=12, majf=0, minf=9 00:38:23.740 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:23.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.740 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.740 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:23.740 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:23.740 filename1: (groupid=0, jobs=1): err= 0: pid=927971: Thu Dec 5 14:10:04 2024 00:38:23.740 read: IOPS=525, BW=2103KiB/s (2154kB/s)(20.6MiB/10011msec) 00:38:23.740 slat (nsec): min=5817, max=96185, avg=37923.94, stdev=17200.24 00:38:23.740 clat (usec): min=16486, max=40020, avg=30066.20, stdev=1093.64 00:38:23.740 lat (usec): min=16506, max=40035, avg=30104.12, stdev=1094.47 00:38:23.740 clat percentiles (usec): 00:38:23.740 | 1.00th=[29230], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:38:23.740 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:38:23.740 | 70.00th=[30278], 
80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:38:23.740 | 99.00th=[31327], 99.50th=[31589], 99.90th=[40109], 99.95th=[40109], 00:38:23.740 | 99.99th=[40109] 00:38:23.740 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2099.20, stdev=64.34, samples=20 00:38:23.740 iops : min= 512, max= 544, avg=524.80, stdev=16.08, samples=20 00:38:23.740 lat (msec) : 20=0.30%, 50=99.70% 00:38:23.740 cpu : usr=98.75%, sys=0.86%, ctx=14, majf=0, minf=9 00:38:23.740 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:23.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.740 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.740 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:23.740 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:23.740 filename1: (groupid=0, jobs=1): err= 0: pid=927972: Thu Dec 5 14:10:04 2024 00:38:23.740 read: IOPS=525, BW=2101KiB/s (2152kB/s)(20.5MiB/10005msec) 00:38:23.740 slat (usec): min=4, max=103, avg=39.11, stdev=21.13 00:38:23.740 clat (usec): min=14585, max=63366, avg=30079.10, stdev=2722.05 00:38:23.740 lat (usec): min=14608, max=63379, avg=30118.21, stdev=2723.32 00:38:23.740 clat percentiles (usec): 00:38:23.740 | 1.00th=[20055], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:38:23.740 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:38:23.740 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:38:23.740 | 99.00th=[37487], 99.50th=[48497], 99.90th=[63177], 99.95th=[63177], 00:38:23.740 | 99.99th=[63177] 00:38:23.740 bw ( KiB/s): min= 1923, max= 2176, per=4.13%, avg=2091.95, stdev=70.37, samples=19 00:38:23.740 iops : min= 480, max= 544, avg=522.95, stdev=17.69, samples=19 00:38:23.740 lat (msec) : 20=0.97%, 50=98.73%, 100=0.30% 00:38:23.740 cpu : usr=98.74%, sys=0.86%, ctx=16, majf=0, minf=9 00:38:23.740 IO depths : 1=5.3%, 2=11.0%, 4=23.5%, 8=52.7%, 16=7.4%, 32=0.0%, 
>=64=0.0% 00:38:23.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.740 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.740 issued rwts: total=5256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:23.740 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:23.740 filename1: (groupid=0, jobs=1): err= 0: pid=927973: Thu Dec 5 14:10:04 2024 00:38:23.740 read: IOPS=526, BW=2104KiB/s (2155kB/s)(20.6MiB/10007msec) 00:38:23.740 slat (usec): min=8, max=112, avg=48.70, stdev=22.81 00:38:23.740 clat (usec): min=15117, max=36956, avg=29926.01, stdev=817.17 00:38:23.740 lat (usec): min=15127, max=36979, avg=29974.71, stdev=822.64 00:38:23.740 clat percentiles (usec): 00:38:23.740 | 1.00th=[29492], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:38:23.740 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:38:23.740 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:38:23.740 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[31589], 00:38:23.740 | 99.99th=[36963] 00:38:23.740 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2101.89, stdev=64.93, samples=19 00:38:23.740 iops : min= 512, max= 544, avg=525.47, stdev=16.23, samples=19 00:38:23.740 lat (msec) : 20=0.17%, 50=99.83% 00:38:23.740 cpu : usr=98.70%, sys=0.90%, ctx=15, majf=0, minf=9 00:38:23.740 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:23.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.740 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.740 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:23.740 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:23.740 filename1: (groupid=0, jobs=1): err= 0: pid=927974: Thu Dec 5 14:10:04 2024 00:38:23.740 read: IOPS=524, BW=2098KiB/s (2149kB/s)(20.5MiB/10005msec) 00:38:23.740 slat (usec): min=7, max=104, 
avg=35.30, stdev=17.30 00:38:23.740 clat (usec): min=16498, max=63835, avg=30150.42, stdev=2085.90 00:38:23.740 lat (usec): min=16509, max=63852, avg=30185.72, stdev=2085.66 00:38:23.740 clat percentiles (usec): 00:38:23.740 | 1.00th=[29230], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:38:23.740 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:38:23.740 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:38:23.740 | 99.00th=[31327], 99.50th=[31589], 99.90th=[63701], 99.95th=[63701], 00:38:23.740 | 99.99th=[63701] 00:38:23.740 bw ( KiB/s): min= 1923, max= 2176, per=4.13%, avg=2088.58, stdev=74.17, samples=19 00:38:23.740 iops : min= 480, max= 544, avg=522.11, stdev=18.64, samples=19 00:38:23.740 lat (msec) : 20=0.30%, 50=99.39%, 100=0.30% 00:38:23.740 cpu : usr=98.75%, sys=0.84%, ctx=12, majf=0, minf=9 00:38:23.740 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:23.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.740 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.740 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:23.740 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:23.740 filename1: (groupid=0, jobs=1): err= 0: pid=927975: Thu Dec 5 14:10:04 2024 00:38:23.740 read: IOPS=524, BW=2098KiB/s (2149kB/s)(20.5MiB/10005msec) 00:38:23.740 slat (nsec): min=7422, max=96702, avg=36878.59, stdev=17607.23 00:38:23.740 clat (usec): min=16484, max=63821, avg=30134.51, stdev=2086.97 00:38:23.740 lat (usec): min=16504, max=63835, avg=30171.39, stdev=2086.71 00:38:23.740 clat percentiles (usec): 00:38:23.740 | 1.00th=[29230], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:38:23.740 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:38:23.740 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:38:23.740 | 99.00th=[31327], 99.50th=[31589], 
99.90th=[63701], 99.95th=[63701], 00:38:23.740 | 99.99th=[63701] 00:38:23.740 bw ( KiB/s): min= 1923, max= 2176, per=4.13%, avg=2088.58, stdev=74.17, samples=19 00:38:23.740 iops : min= 480, max= 544, avg=522.11, stdev=18.64, samples=19 00:38:23.740 lat (msec) : 20=0.30%, 50=99.39%, 100=0.30% 00:38:23.740 cpu : usr=98.86%, sys=0.73%, ctx=12, majf=0, minf=9 00:38:23.740 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:23.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.740 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.740 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:23.740 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:23.740 filename1: (groupid=0, jobs=1): err= 0: pid=927976: Thu Dec 5 14:10:04 2024 00:38:23.740 read: IOPS=525, BW=2100KiB/s (2151kB/s)(20.5MiB/10006msec) 00:38:23.740 slat (usec): min=4, max=105, avg=45.66, stdev=23.75 00:38:23.740 clat (usec): min=14560, max=65908, avg=30001.99, stdev=2469.84 00:38:23.740 lat (usec): min=14580, max=65926, avg=30047.64, stdev=2471.55 00:38:23.740 clat percentiles (usec): 00:38:23.740 | 1.00th=[22152], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754], 00:38:23.740 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:38:23.740 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:38:23.740 | 99.00th=[31327], 99.50th=[33817], 99.90th=[63701], 99.95th=[63701], 00:38:23.740 | 99.99th=[65799] 00:38:23.740 bw ( KiB/s): min= 1968, max= 2176, per=4.13%, avg=2090.95, stdev=69.14, samples=19 00:38:23.740 iops : min= 492, max= 544, avg=522.74, stdev=17.28, samples=19 00:38:23.740 lat (msec) : 20=0.63%, 50=98.97%, 100=0.40% 00:38:23.740 cpu : usr=98.65%, sys=0.94%, ctx=13, majf=0, minf=9 00:38:23.740 IO depths : 1=6.1%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:38:23.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:38:23.741 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.741 issued rwts: total=5254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:23.741 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:23.741 filename1: (groupid=0, jobs=1): err= 0: pid=927977: Thu Dec 5 14:10:04 2024 00:38:23.741 read: IOPS=525, BW=2102KiB/s (2153kB/s)(20.6MiB/10015msec) 00:38:23.741 slat (nsec): min=6659, max=79082, avg=13936.03, stdev=3354.57 00:38:23.741 clat (usec): min=15676, max=49768, avg=30318.19, stdev=1818.97 00:38:23.741 lat (usec): min=15686, max=49793, avg=30332.13, stdev=1818.52 00:38:23.741 clat percentiles (usec): 00:38:23.741 | 1.00th=[20055], 5.00th=[30016], 10.00th=[30278], 20.00th=[30278], 00:38:23.741 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:38:23.741 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:38:23.741 | 99.00th=[33817], 99.50th=[44303], 99.90th=[45351], 99.95th=[45351], 00:38:23.741 | 99.99th=[49546] 00:38:23.741 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2097.25, stdev=60.63, samples=20 00:38:23.741 iops : min= 512, max= 544, avg=524.30, stdev=15.15, samples=20 00:38:23.741 lat (msec) : 20=0.93%, 50=99.07% 00:38:23.741 cpu : usr=98.48%, sys=1.12%, ctx=13, majf=0, minf=9 00:38:23.741 IO depths : 1=5.7%, 2=11.9%, 4=24.9%, 8=50.7%, 16=6.8%, 32=0.0%, >=64=0.0% 00:38:23.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.741 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.741 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:23.741 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:23.741 filename2: (groupid=0, jobs=1): err= 0: pid=927978: Thu Dec 5 14:10:04 2024 00:38:23.741 read: IOPS=525, BW=2102KiB/s (2152kB/s)(20.6MiB/10017msec) 00:38:23.741 slat (usec): min=7, max=100, avg=37.92, stdev=17.26 00:38:23.741 clat (usec): min=16423, max=45682, 
avg=30081.65, stdev=1281.05 00:38:23.741 lat (usec): min=16439, max=45697, avg=30119.57, stdev=1281.55 00:38:23.741 clat percentiles (usec): 00:38:23.741 | 1.00th=[29230], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:38:23.741 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:38:23.741 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:38:23.741 | 99.00th=[31327], 99.50th=[31589], 99.90th=[45876], 99.95th=[45876], 00:38:23.741 | 99.99th=[45876] 00:38:23.741 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2097.90, stdev=75.41, samples=20 00:38:23.741 iops : min= 480, max= 544, avg=524.45, stdev=18.84, samples=20 00:38:23.741 lat (msec) : 20=0.30%, 50=99.70% 00:38:23.741 cpu : usr=98.62%, sys=0.97%, ctx=10, majf=0, minf=9 00:38:23.741 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:23.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.741 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.741 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:23.741 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:23.741 filename2: (groupid=0, jobs=1): err= 0: pid=927979: Thu Dec 5 14:10:04 2024 00:38:23.741 read: IOPS=525, BW=2103KiB/s (2153kB/s)(20.5MiB/10005msec) 00:38:23.741 slat (usec): min=7, max=109, avg=46.27, stdev=23.61 00:38:23.741 clat (usec): min=9594, max=65335, avg=29963.14, stdev=2627.94 00:38:23.741 lat (usec): min=9606, max=65348, avg=30009.41, stdev=2629.86 00:38:23.741 clat percentiles (usec): 00:38:23.741 | 1.00th=[20841], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:38:23.741 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:38:23.741 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:38:23.741 | 99.00th=[31327], 99.50th=[44303], 99.90th=[65274], 99.95th=[65274], 00:38:23.741 | 99.99th=[65274] 00:38:23.741 bw ( KiB/s): min= 2020, 
max= 2176, per=4.14%, avg=2093.68, stdev=64.90, samples=19 00:38:23.741 iops : min= 505, max= 544, avg=523.42, stdev=16.23, samples=19 00:38:23.741 lat (msec) : 10=0.08%, 20=0.61%, 50=98.82%, 100=0.49% 00:38:23.741 cpu : usr=98.80%, sys=0.79%, ctx=8, majf=0, minf=9 00:38:23.741 IO depths : 1=6.0%, 2=12.1%, 4=24.4%, 8=50.9%, 16=6.5%, 32=0.0%, >=64=0.0% 00:38:23.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.741 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.741 issued rwts: total=5260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:23.741 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:23.741 filename2: (groupid=0, jobs=1): err= 0: pid=927980: Thu Dec 5 14:10:04 2024 00:38:23.741 read: IOPS=526, BW=2104KiB/s (2155kB/s)(20.6MiB/10017msec) 00:38:23.741 slat (nsec): min=7592, max=96612, avg=14852.15, stdev=6296.12 00:38:23.741 clat (usec): min=16669, max=50343, avg=30274.44, stdev=1947.81 00:38:23.741 lat (usec): min=16678, max=50424, avg=30289.30, stdev=1948.12 00:38:23.741 clat percentiles (usec): 00:38:23.741 | 1.00th=[20055], 5.00th=[30016], 10.00th=[30278], 20.00th=[30278], 00:38:23.741 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:38:23.741 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:38:23.741 | 99.00th=[31851], 99.50th=[43779], 99.90th=[50070], 99.95th=[50070], 00:38:23.741 | 99.99th=[50594] 00:38:23.741 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2099.45, stdev=59.69, samples=20 00:38:23.741 iops : min= 512, max= 544, avg=524.85, stdev=14.91, samples=20 00:38:23.741 lat (msec) : 20=1.06%, 50=98.79%, 100=0.15% 00:38:23.741 cpu : usr=98.68%, sys=0.92%, ctx=10, majf=0, minf=9 00:38:23.741 IO depths : 1=5.9%, 2=12.0%, 4=24.6%, 8=50.8%, 16=6.7%, 32=0.0%, >=64=0.0% 00:38:23.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.741 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:38:23.741 issued rwts: total=5270,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:23.741 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:23.741 filename2: (groupid=0, jobs=1): err= 0: pid=927981: Thu Dec 5 14:10:04 2024 00:38:23.741 read: IOPS=527, BW=2111KiB/s (2161kB/s)(20.6MiB/10007msec) 00:38:23.741 slat (usec): min=7, max=104, avg=49.02, stdev=21.97 00:38:23.741 clat (usec): min=11976, max=41667, avg=29850.51, stdev=1488.94 00:38:23.741 lat (usec): min=11998, max=41698, avg=29899.52, stdev=1491.50 00:38:23.741 clat percentiles (usec): 00:38:23.741 | 1.00th=[20841], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:38:23.741 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:38:23.741 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:38:23.741 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31589], 99.95th=[41157], 00:38:23.741 | 99.99th=[41681] 00:38:23.741 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2105.60, stdev=65.33, samples=20 00:38:23.741 iops : min= 512, max= 544, avg=526.40, stdev=16.33, samples=20 00:38:23.741 lat (msec) : 20=0.68%, 50=99.32% 00:38:23.741 cpu : usr=98.84%, sys=0.76%, ctx=13, majf=0, minf=9 00:38:23.741 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:23.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.741 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.741 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:23.741 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:23.741 filename2: (groupid=0, jobs=1): err= 0: pid=927982: Thu Dec 5 14:10:04 2024 00:38:23.741 read: IOPS=524, BW=2098KiB/s (2148kB/s)(20.5MiB/10006msec) 00:38:23.741 slat (nsec): min=5113, max=96964, avg=37058.68, stdev=17234.95 00:38:23.741 clat (usec): min=16606, max=63969, avg=30141.49, stdev=2089.15 00:38:23.741 lat (usec): min=16625, max=63988, avg=30178.55, 
stdev=2088.71 00:38:23.741 clat percentiles (usec): 00:38:23.741 | 1.00th=[29230], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:38:23.741 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:38:23.741 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:38:23.741 | 99.00th=[31327], 99.50th=[31589], 99.90th=[63701], 99.95th=[63701], 00:38:23.741 | 99.99th=[64226] 00:38:23.741 bw ( KiB/s): min= 1920, max= 2176, per=4.13%, avg=2088.42, stdev=74.55, samples=19 00:38:23.741 iops : min= 480, max= 544, avg=522.11, stdev=18.64, samples=19 00:38:23.741 lat (msec) : 20=0.30%, 50=99.39%, 100=0.30% 00:38:23.741 cpu : usr=98.61%, sys=0.98%, ctx=14, majf=0, minf=9 00:38:23.741 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:23.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.741 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.741 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:23.741 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:23.741 filename2: (groupid=0, jobs=1): err= 0: pid=927983: Thu Dec 5 14:10:04 2024 00:38:23.741 read: IOPS=526, BW=2105KiB/s (2156kB/s)(20.6MiB/10002msec) 00:38:23.741 slat (usec): min=8, max=108, avg=48.50, stdev=22.77 00:38:23.741 clat (usec): min=16619, max=31483, avg=29912.79, stdev=917.55 00:38:23.741 lat (usec): min=16633, max=31532, avg=29961.29, stdev=922.37 00:38:23.741 clat percentiles (usec): 00:38:23.741 | 1.00th=[28705], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:38:23.741 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:38:23.741 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:38:23.741 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:38:23.741 | 99.99th=[31589] 00:38:23.741 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2101.89, stdev=64.93, samples=19 00:38:23.741 iops 
: min= 512, max= 544, avg=525.47, stdev=16.23, samples=19 00:38:23.741 lat (msec) : 20=0.30%, 50=99.70% 00:38:23.741 cpu : usr=98.73%, sys=0.86%, ctx=13, majf=0, minf=9 00:38:23.741 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:23.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.741 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.741 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:23.741 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:23.741 filename2: (groupid=0, jobs=1): err= 0: pid=927984: Thu Dec 5 14:10:04 2024 00:38:23.741 read: IOPS=527, BW=2111KiB/s (2161kB/s)(20.6MiB/10007msec) 00:38:23.741 slat (usec): min=8, max=100, avg=40.55, stdev=16.79 00:38:23.741 clat (usec): min=11830, max=31541, avg=29983.04, stdev=1442.96 00:38:23.741 lat (usec): min=11857, max=31560, avg=30023.59, stdev=1442.45 00:38:23.741 clat percentiles (usec): 00:38:23.741 | 1.00th=[21627], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754], 00:38:23.741 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:38:23.741 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:38:23.742 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31327], 99.95th=[31589], 00:38:23.742 | 99.99th=[31589] 00:38:23.742 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2105.60, stdev=65.33, samples=20 00:38:23.742 iops : min= 512, max= 544, avg=526.40, stdev=16.33, samples=20 00:38:23.742 lat (msec) : 20=0.61%, 50=99.39% 00:38:23.742 cpu : usr=98.70%, sys=0.87%, ctx=78, majf=0, minf=9 00:38:23.742 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:23.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.742 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.742 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:23.742 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:38:23.742 filename2: (groupid=0, jobs=1): err= 0: pid=927985: Thu Dec 5 14:10:04 2024 00:38:23.742 read: IOPS=525, BW=2103KiB/s (2154kB/s)(20.6MiB/10012msec) 00:38:23.742 slat (nsec): min=5543, max=97553, avg=37858.15, stdev=17215.74 00:38:23.742 clat (usec): min=16511, max=40222, avg=30064.08, stdev=1098.07 00:38:23.742 lat (usec): min=16530, max=40245, avg=30101.94, stdev=1098.99 00:38:23.742 clat percentiles (usec): 00:38:23.742 | 1.00th=[29230], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:38:23.742 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:38:23.742 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:38:23.742 | 99.00th=[31327], 99.50th=[31589], 99.90th=[40109], 99.95th=[40109], 00:38:23.742 | 99.99th=[40109] 00:38:23.742 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2099.20, stdev=64.34, samples=20 00:38:23.742 iops : min= 512, max= 544, avg=524.80, stdev=16.08, samples=20 00:38:23.742 lat (msec) : 20=0.30%, 50=99.70% 00:38:23.742 cpu : usr=98.61%, sys=0.98%, ctx=13, majf=0, minf=9 00:38:23.742 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:23.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.742 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.742 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:23.742 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:23.742 00:38:23.742 Run status group 0 (all jobs): 00:38:23.742 READ: bw=49.4MiB/s (51.8MB/s), 2098KiB/s-2219KiB/s (2148kB/s-2273kB/s), io=495MiB (519MB), run=10002-10018msec 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:23.742 14:10:04 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:23.742 
14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:23.742 bdev_null0 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:38:23.742 [2024-12-05 14:10:04.751530] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:23.742 bdev_null1 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.742 14:10:04 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:23.742 14:10:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:23.742 { 00:38:23.742 "params": { 00:38:23.742 "name": "Nvme$subsystem", 00:38:23.742 "trtype": "$TEST_TRANSPORT", 00:38:23.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:23.742 "adrfam": "ipv4", 00:38:23.742 "trsvcid": "$NVMF_PORT", 00:38:23.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:23.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:23.743 "hdgst": ${hdgst:-false}, 00:38:23.743 "ddgst": ${ddgst:-false} 00:38:23.743 }, 00:38:23.743 "method": "bdev_nvme_attach_controller" 00:38:23.743 } 00:38:23.743 EOF 00:38:23.743 )") 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:23.743 14:10:04 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:23.743 { 00:38:23.743 "params": { 00:38:23.743 "name": "Nvme$subsystem", 00:38:23.743 "trtype": "$TEST_TRANSPORT", 00:38:23.743 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:23.743 "adrfam": "ipv4", 00:38:23.743 "trsvcid": "$NVMF_PORT", 00:38:23.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:23.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:23.743 "hdgst": ${hdgst:-false}, 00:38:23.743 "ddgst": ${ddgst:-false} 00:38:23.743 }, 00:38:23.743 "method": "bdev_nvme_attach_controller" 00:38:23.743 } 00:38:23.743 EOF 00:38:23.743 )") 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:23.743 "params": { 00:38:23.743 "name": "Nvme0", 00:38:23.743 "trtype": "tcp", 00:38:23.743 "traddr": "10.0.0.2", 00:38:23.743 "adrfam": "ipv4", 00:38:23.743 "trsvcid": "4420", 00:38:23.743 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:23.743 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:23.743 "hdgst": false, 00:38:23.743 "ddgst": false 00:38:23.743 }, 00:38:23.743 "method": "bdev_nvme_attach_controller" 00:38:23.743 },{ 00:38:23.743 "params": { 00:38:23.743 "name": "Nvme1", 00:38:23.743 "trtype": "tcp", 00:38:23.743 "traddr": "10.0.0.2", 00:38:23.743 "adrfam": "ipv4", 00:38:23.743 "trsvcid": "4420", 00:38:23.743 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:23.743 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:23.743 "hdgst": false, 00:38:23.743 "ddgst": false 00:38:23.743 }, 00:38:23.743 "method": "bdev_nvme_attach_controller" 00:38:23.743 }' 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:23.743 14:10:04 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:23.743 14:10:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:23.743 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:23.743 ... 00:38:23.743 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:23.743 ... 00:38:23.743 fio-3.35 00:38:23.743 Starting 4 threads 00:38:29.009 00:38:29.009 filename0: (groupid=0, jobs=1): err= 0: pid=929942: Thu Dec 5 14:10:10 2024 00:38:29.009 read: IOPS=2876, BW=22.5MiB/s (23.6MB/s)(112MiB/5002msec) 00:38:29.009 slat (nsec): min=6104, max=53066, avg=8504.30, stdev=2919.39 00:38:29.009 clat (usec): min=791, max=5461, avg=2754.50, stdev=405.54 00:38:29.009 lat (usec): min=807, max=5468, avg=2763.01, stdev=405.27 00:38:29.009 clat percentiles (usec): 00:38:29.009 | 1.00th=[ 1713], 5.00th=[ 2073], 10.00th=[ 2245], 20.00th=[ 2442], 00:38:29.009 | 30.00th=[ 2540], 40.00th=[ 2671], 50.00th=[ 2835], 60.00th=[ 2933], 00:38:29.009 | 70.00th=[ 2966], 80.00th=[ 2999], 90.00th=[ 3130], 95.00th=[ 3294], 00:38:29.009 | 99.00th=[ 3949], 99.50th=[ 4113], 99.90th=[ 4752], 99.95th=[ 4948], 00:38:29.009 | 99.99th=[ 5080] 00:38:29.009 bw ( KiB/s): min=21648, max=24848, per=26.84%, avg=22872.89, stdev=1148.36, samples=9 00:38:29.009 iops : min= 2706, max= 3106, avg=2859.11, stdev=143.54, samples=9 00:38:29.009 lat (usec) : 1000=0.08% 00:38:29.009 lat (msec) : 2=3.23%, 4=95.84%, 10=0.85% 00:38:29.009 cpu : usr=95.48%, sys=4.22%, ctx=8, majf=0, minf=9 00:38:29.009 IO depths : 1=0.3%, 2=6.3%, 4=66.2%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:29.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:29.009 complete : 0=0.0%, 
4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:29.009 issued rwts: total=14389,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:29.009 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:29.009 filename0: (groupid=0, jobs=1): err= 0: pid=929943: Thu Dec 5 14:10:10 2024 00:38:29.009 read: IOPS=2573, BW=20.1MiB/s (21.1MB/s)(101MiB/5002msec) 00:38:29.009 slat (nsec): min=6099, max=38619, avg=8568.31, stdev=2883.87 00:38:29.009 clat (usec): min=1031, max=5438, avg=3083.05, stdev=434.68 00:38:29.009 lat (usec): min=1041, max=5449, avg=3091.62, stdev=434.53 00:38:29.009 clat percentiles (usec): 00:38:29.009 | 1.00th=[ 2212], 5.00th=[ 2507], 10.00th=[ 2704], 20.00th=[ 2900], 00:38:29.009 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 2999], 00:38:29.009 | 70.00th=[ 3064], 80.00th=[ 3261], 90.00th=[ 3621], 95.00th=[ 3916], 00:38:29.009 | 99.00th=[ 4686], 99.50th=[ 4883], 99.90th=[ 5145], 99.95th=[ 5211], 00:38:29.009 | 99.99th=[ 5342] 00:38:29.009 bw ( KiB/s): min=19984, max=21216, per=24.28%, avg=20688.00, stdev=457.05, samples=9 00:38:29.009 iops : min= 2498, max= 2652, avg=2586.00, stdev=57.13, samples=9 00:38:29.009 lat (msec) : 2=0.40%, 4=94.93%, 10=4.67% 00:38:29.009 cpu : usr=95.50%, sys=4.18%, ctx=6, majf=0, minf=9 00:38:29.009 IO depths : 1=0.1%, 2=2.7%, 4=68.7%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:29.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:29.009 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:29.009 issued rwts: total=12875,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:29.009 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:29.009 filename1: (groupid=0, jobs=1): err= 0: pid=929944: Thu Dec 5 14:10:10 2024 00:38:29.009 read: IOPS=2629, BW=20.5MiB/s (21.5MB/s)(103MiB/5002msec) 00:38:29.009 slat (nsec): min=6087, max=38835, avg=8664.58, stdev=3049.08 00:38:29.009 clat (usec): min=828, max=5287, avg=3017.07, stdev=378.72 00:38:29.009 lat 
(usec): min=839, max=5314, avg=3025.73, stdev=378.65 00:38:29.009 clat percentiles (usec): 00:38:29.009 | 1.00th=[ 2114], 5.00th=[ 2442], 10.00th=[ 2606], 20.00th=[ 2802], 00:38:29.009 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2966], 60.00th=[ 2999], 00:38:29.009 | 70.00th=[ 3032], 80.00th=[ 3228], 90.00th=[ 3490], 95.00th=[ 3687], 00:38:29.009 | 99.00th=[ 4293], 99.50th=[ 4555], 99.90th=[ 5014], 99.95th=[ 5211], 00:38:29.009 | 99.99th=[ 5276] 00:38:29.009 bw ( KiB/s): min=20400, max=21920, per=24.70%, avg=21048.00, stdev=479.96, samples=10 00:38:29.009 iops : min= 2550, max= 2740, avg=2631.00, stdev=59.99, samples=10 00:38:29.009 lat (usec) : 1000=0.01% 00:38:29.009 lat (msec) : 2=0.54%, 4=97.47%, 10=1.98% 00:38:29.009 cpu : usr=95.62%, sys=4.06%, ctx=8, majf=0, minf=9 00:38:29.009 IO depths : 1=0.1%, 2=1.5%, 4=70.0%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:29.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:29.009 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:29.009 issued rwts: total=13155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:29.009 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:29.009 filename1: (groupid=0, jobs=1): err= 0: pid=929945: Thu Dec 5 14:10:10 2024 00:38:29.009 read: IOPS=2572, BW=20.1MiB/s (21.1MB/s)(101MiB/5001msec) 00:38:29.009 slat (nsec): min=6079, max=43220, avg=8418.97, stdev=2902.91 00:38:29.009 clat (usec): min=1033, max=5775, avg=3084.72, stdev=401.62 00:38:29.009 lat (usec): min=1040, max=5799, avg=3093.14, stdev=401.41 00:38:29.009 clat percentiles (usec): 00:38:29.009 | 1.00th=[ 2212], 5.00th=[ 2573], 10.00th=[ 2769], 20.00th=[ 2900], 00:38:29.009 | 30.00th=[ 2966], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 2999], 00:38:29.009 | 70.00th=[ 3097], 80.00th=[ 3261], 90.00th=[ 3589], 95.00th=[ 3785], 00:38:29.009 | 99.00th=[ 4621], 99.50th=[ 4817], 99.90th=[ 5276], 99.95th=[ 5473], 00:38:29.009 | 99.99th=[ 5538] 00:38:29.009 bw ( KiB/s): 
min=19168, max=21312, per=24.12%, avg=20555.44, stdev=744.82, samples=9 00:38:29.009 iops : min= 2396, max= 2664, avg=2569.33, stdev=93.19, samples=9 00:38:29.009 lat (msec) : 2=0.37%, 4=96.46%, 10=3.16% 00:38:29.009 cpu : usr=96.16%, sys=3.52%, ctx=6, majf=0, minf=9 00:38:29.009 IO depths : 1=0.1%, 2=1.4%, 4=72.0%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:29.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:29.009 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:29.009 issued rwts: total=12865,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:29.009 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:29.009 00:38:29.009 Run status group 0 (all jobs): 00:38:29.009 READ: bw=83.2MiB/s (87.3MB/s), 20.1MiB/s-22.5MiB/s (21.1MB/s-23.6MB/s), io=416MiB (437MB), run=5001-5002msec 00:38:29.009 14:10:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:38:29.009 14:10:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:29.009 14:10:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:29.009 14:10:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:29.009 14:10:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:29.009 14:10:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:29.009 14:10:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.009 14:10:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:29.009 14:10:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.010 14:10:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:29.010 14:10:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.010 14:10:11 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:29.010 14:10:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.010 14:10:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:29.010 14:10:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:29.010 14:10:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:29.010 14:10:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:29.010 14:10:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.010 14:10:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:29.010 14:10:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.010 14:10:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:29.010 14:10:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.010 14:10:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:29.010 14:10:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.010 00:38:29.010 real 0m24.328s 00:38:29.010 user 4m51.856s 00:38:29.010 sys 0m4.756s 00:38:29.010 14:10:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:29.010 14:10:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:29.010 ************************************ 00:38:29.010 END TEST fio_dif_rand_params 00:38:29.010 ************************************ 00:38:29.010 14:10:11 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:38:29.010 14:10:11 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:29.010 14:10:11 nvmf_dif -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:38:29.010 14:10:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:29.010 ************************************ 00:38:29.010 START TEST fio_dif_digest 00:38:29.010 ************************************ 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:29.010 bdev_null0 
00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:29.010 [2024-12-05 14:10:11.233551] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # 
fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:29.010 { 00:38:29.010 "params": { 00:38:29.010 "name": "Nvme$subsystem", 00:38:29.010 "trtype": "$TEST_TRANSPORT", 00:38:29.010 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:29.010 "adrfam": "ipv4", 00:38:29.010 "trsvcid": "$NVMF_PORT", 00:38:29.010 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:29.010 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:29.010 "hdgst": ${hdgst:-false}, 00:38:29.010 "ddgst": ${ddgst:-false} 00:38:29.010 }, 00:38:29.010 "method": "bdev_nvme_attach_controller" 00:38:29.010 } 00:38:29.010 EOF 00:38:29.010 )") 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:29.010 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:38:29.010 
14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:29.011 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:29.011 14:10:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:38:29.011 14:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:38:29.011 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:29.011 14:10:11 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:38:29.011 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:38:29.011 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:29.011 14:10:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:38:29.011 14:10:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:38:29.011 14:10:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:29.011 "params": { 00:38:29.011 "name": "Nvme0", 00:38:29.011 "trtype": "tcp", 00:38:29.011 "traddr": "10.0.0.2", 00:38:29.011 "adrfam": "ipv4", 00:38:29.011 "trsvcid": "4420", 00:38:29.011 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:29.011 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:29.011 "hdgst": true, 00:38:29.011 "ddgst": true 00:38:29.011 }, 00:38:29.011 "method": "bdev_nvme_attach_controller" 00:38:29.011 }' 00:38:29.011 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:29.011 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:29.011 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:29.011 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:29.011 14:10:11 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:29.011 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:29.011 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:29.011 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:29.011 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:29.011 14:10:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:29.268 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:29.268 ... 00:38:29.268 fio-3.35 00:38:29.268 Starting 3 threads 00:38:41.491 00:38:41.491 filename0: (groupid=0, jobs=1): err= 0: pid=931001: Thu Dec 5 14:10:22 2024 00:38:41.491 read: IOPS=287, BW=35.9MiB/s (37.7MB/s)(361MiB/10044msec) 00:38:41.491 slat (nsec): min=6377, max=26833, avg=11578.60, stdev=1820.88 00:38:41.491 clat (usec): min=7662, max=49863, avg=10409.21, stdev=1252.18 00:38:41.491 lat (usec): min=7674, max=49876, avg=10420.79, stdev=1252.18 00:38:41.491 clat percentiles (usec): 00:38:41.491 | 1.00th=[ 8717], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[ 9765], 00:38:41.491 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:38:41.491 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11338], 95.00th=[11600], 00:38:41.491 | 99.00th=[12256], 99.50th=[12387], 99.90th=[13960], 99.95th=[47449], 00:38:41.491 | 99.99th=[50070] 00:38:41.491 bw ( KiB/s): min=35840, max=37888, per=35.14%, avg=36924.30, stdev=694.84, samples=20 00:38:41.491 iops : min= 280, max= 296, avg=288.45, stdev= 5.43, samples=20 00:38:41.491 lat (msec) : 10=29.44%, 20=70.49%, 50=0.07% 00:38:41.491 cpu : usr=94.60%, sys=5.10%, ctx=17, 
majf=0, minf=78 00:38:41.491 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:41.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:41.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:41.491 issued rwts: total=2887,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:41.491 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:41.491 filename0: (groupid=0, jobs=1): err= 0: pid=931002: Thu Dec 5 14:10:22 2024 00:38:41.491 read: IOPS=269, BW=33.7MiB/s (35.4MB/s)(339MiB/10045msec) 00:38:41.491 slat (nsec): min=6468, max=26077, avg=11400.04, stdev=1673.75 00:38:41.491 clat (usec): min=7477, max=45948, avg=11086.73, stdev=1200.74 00:38:41.491 lat (usec): min=7488, max=45960, avg=11098.13, stdev=1200.73 00:38:41.491 clat percentiles (usec): 00:38:41.491 | 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10421], 00:38:41.491 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:38:41.491 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11994], 95.00th=[12256], 00:38:41.491 | 99.00th=[13042], 99.50th=[13304], 99.90th=[15270], 99.95th=[45351], 00:38:41.491 | 99.99th=[45876] 00:38:41.491 bw ( KiB/s): min=33792, max=36096, per=33.00%, avg=34675.20, stdev=640.54, samples=20 00:38:41.491 iops : min= 264, max= 282, avg=270.90, stdev= 5.00, samples=20 00:38:41.491 lat (msec) : 10=7.27%, 20=92.66%, 50=0.07% 00:38:41.491 cpu : usr=94.78%, sys=4.92%, ctx=20, majf=0, minf=48 00:38:41.491 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:41.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:41.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:41.491 issued rwts: total=2711,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:41.491 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:41.491 filename0: (groupid=0, jobs=1): err= 0: pid=931003: Thu Dec 5 14:10:22 2024 
00:38:41.491 read: IOPS=263, BW=32.9MiB/s (34.5MB/s)(331MiB/10045msec) 00:38:41.491 slat (nsec): min=6309, max=26694, avg=11319.93, stdev=1733.26 00:38:41.491 clat (usec): min=8881, max=49569, avg=11355.20, stdev=1246.35 00:38:41.491 lat (usec): min=8893, max=49579, avg=11366.52, stdev=1246.34 00:38:41.491 clat percentiles (usec): 00:38:41.491 | 1.00th=[ 9634], 5.00th=[10159], 10.00th=[10421], 20.00th=[10683], 00:38:41.491 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:38:41.491 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12387], 95.00th=[12649], 00:38:41.491 | 99.00th=[13173], 99.50th=[13304], 99.90th=[13829], 99.95th=[45351], 00:38:41.491 | 99.99th=[49546] 00:38:41.491 bw ( KiB/s): min=32768, max=34816, per=32.22%, avg=33856.00, stdev=609.64, samples=20 00:38:41.491 iops : min= 256, max= 272, avg=264.50, stdev= 4.76, samples=20 00:38:41.491 lat (msec) : 10=4.00%, 20=95.92%, 50=0.08% 00:38:41.491 cpu : usr=94.37%, sys=5.33%, ctx=17, majf=0, minf=70 00:38:41.491 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:41.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:41.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:41.491 issued rwts: total=2647,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:41.491 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:41.491 00:38:41.491 Run status group 0 (all jobs): 00:38:41.491 READ: bw=103MiB/s (108MB/s), 32.9MiB/s-35.9MiB/s (34.5MB/s-37.7MB/s), io=1031MiB (1081MB), run=10044-10045msec 00:38:41.491 14:10:22 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:38:41.491 14:10:22 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:38:41.491 14:10:22 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:38:41.491 14:10:22 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:41.491 14:10:22 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # 
local sub_id=0 00:38:41.491 14:10:22 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:41.491 14:10:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.491 14:10:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:41.491 14:10:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.491 14:10:22 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:41.491 14:10:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.491 14:10:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:41.491 14:10:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.491 00:38:41.491 real 0m11.140s 00:38:41.491 user 0m35.170s 00:38:41.491 sys 0m1.861s 00:38:41.491 14:10:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:41.491 14:10:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:41.491 ************************************ 00:38:41.491 END TEST fio_dif_digest 00:38:41.491 ************************************ 00:38:41.491 14:10:22 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:38:41.491 14:10:22 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:38:41.491 14:10:22 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:41.491 14:10:22 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:38:41.491 14:10:22 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:41.491 14:10:22 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:38:41.491 14:10:22 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:41.491 14:10:22 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:41.491 rmmod nvme_tcp 00:38:41.491 rmmod nvme_fabrics 00:38:41.491 rmmod nvme_keyring 00:38:41.491 14:10:22 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:38:41.491 14:10:22 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:38:41.491 14:10:22 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:38:41.491 14:10:22 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 922614 ']' 00:38:41.491 14:10:22 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 922614 00:38:41.491 14:10:22 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 922614 ']' 00:38:41.491 14:10:22 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 922614 00:38:41.491 14:10:22 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:38:41.491 14:10:22 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:41.491 14:10:22 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 922614 00:38:41.491 14:10:22 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:41.492 14:10:22 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:41.492 14:10:22 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 922614' 00:38:41.492 killing process with pid 922614 00:38:41.492 14:10:22 nvmf_dif -- common/autotest_common.sh@973 -- # kill 922614 00:38:41.492 14:10:22 nvmf_dif -- common/autotest_common.sh@978 -- # wait 922614 00:38:41.492 14:10:22 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:41.492 14:10:22 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:42.866 Waiting for block devices as requested 00:38:42.866 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:38:43.125 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:43.125 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:43.125 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:43.125 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:43.383 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:43.383 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:43.383 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:43.642 0000:00:04.0 (8086 
2021): vfio-pci -> ioatdma 00:38:43.642 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:43.642 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:43.901 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:43.901 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:43.901 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:43.901 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:44.160 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:44.160 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:44.160 14:10:26 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:44.160 14:10:26 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:44.160 14:10:26 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:38:44.160 14:10:26 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:38:44.160 14:10:26 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:44.160 14:10:26 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:38:44.160 14:10:26 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:44.160 14:10:26 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:44.160 14:10:26 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:44.160 14:10:26 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:44.160 14:10:26 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:46.713 14:10:28 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:46.713 00:38:46.713 real 1m13.904s 00:38:46.713 user 7m9.336s 00:38:46.713 sys 0m20.464s 00:38:46.713 14:10:28 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:46.714 14:10:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:46.714 ************************************ 00:38:46.714 END TEST nvmf_dif 00:38:46.714 ************************************ 00:38:46.714 14:10:28 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:46.714 14:10:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:46.714 14:10:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:46.714 14:10:28 -- common/autotest_common.sh@10 -- # set +x 00:38:46.714 ************************************ 00:38:46.714 START TEST nvmf_abort_qd_sizes 00:38:46.714 ************************************ 00:38:46.714 14:10:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:46.714 * Looking for test storage... 00:38:46.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:46.714 14:10:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:46.714 14:10:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:38:46.714 14:10:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 
00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:46.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.714 --rc genhtml_branch_coverage=1 00:38:46.714 --rc genhtml_function_coverage=1 00:38:46.714 --rc 
genhtml_legend=1 00:38:46.714 --rc geninfo_all_blocks=1 00:38:46.714 --rc geninfo_unexecuted_blocks=1 00:38:46.714 00:38:46.714 ' 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:46.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.714 --rc genhtml_branch_coverage=1 00:38:46.714 --rc genhtml_function_coverage=1 00:38:46.714 --rc genhtml_legend=1 00:38:46.714 --rc geninfo_all_blocks=1 00:38:46.714 --rc geninfo_unexecuted_blocks=1 00:38:46.714 00:38:46.714 ' 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:46.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.714 --rc genhtml_branch_coverage=1 00:38:46.714 --rc genhtml_function_coverage=1 00:38:46.714 --rc genhtml_legend=1 00:38:46.714 --rc geninfo_all_blocks=1 00:38:46.714 --rc geninfo_unexecuted_blocks=1 00:38:46.714 00:38:46.714 ' 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:46.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.714 --rc genhtml_branch_coverage=1 00:38:46.714 --rc genhtml_function_coverage=1 00:38:46.714 --rc genhtml_legend=1 00:38:46.714 --rc geninfo_all_blocks=1 00:38:46.714 --rc geninfo_unexecuted_blocks=1 00:38:46.714 00:38:46.714 ' 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:46.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:46.714 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:46.715 14:10:29 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:38:46.715 14:10:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:53.283 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:38:53.284 Found 0000:86:00.0 (0x8086 - 0x159b) 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:38:53.284 Found 0000:86:00.1 (0x8086 - 0x159b) 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:38:53.284 Found net devices under 0000:86:00.0: cvl_0_0 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:38:53.284 Found net devices under 0000:86:00.1: cvl_0_1 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:53.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:53.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:38:53.284 00:38:53.284 --- 10.0.0.2 ping statistics --- 00:38:53.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:53.284 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:53.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:53.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:38:53.284 00:38:53.284 --- 10.0.0.1 ping statistics --- 00:38:53.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:53.284 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:38:53.284 14:10:34 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:55.350 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:55.350 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:55.350 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:55.350 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:55.350 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:55.350 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:55.350 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:55.350 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:55.350 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:55.350 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:55.350 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:55.350 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:55.350 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:55.350 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:55.350 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:38:55.350 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:56.727 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:38:56.985 14:10:39 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:56.985 14:10:39 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:56.985 14:10:39 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:56.985 14:10:39 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:56.985 14:10:39 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:56.985 14:10:39 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:56.985 14:10:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:38:56.985 14:10:39 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:56.985 14:10:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:56.985 14:10:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:56.985 14:10:39 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=939019 00:38:56.985 14:10:39 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:38:56.985 14:10:39 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 939019 00:38:56.985 14:10:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 939019 ']' 00:38:56.985 14:10:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:56.985 14:10:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:56.985 14:10:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:56.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:56.985 14:10:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:56.985 14:10:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:56.985 [2024-12-05 14:10:39.451123] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:38:56.985 [2024-12-05 14:10:39.451165] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:56.985 [2024-12-05 14:10:39.529942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:57.242 [2024-12-05 14:10:39.576692] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:57.242 [2024-12-05 14:10:39.576728] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:57.242 [2024-12-05 14:10:39.576736] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:57.242 [2024-12-05 14:10:39.576741] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:57.242 [2024-12-05 14:10:39.576747] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:57.242 [2024-12-05 14:10:39.578168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:57.242 [2024-12-05 14:10:39.578196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:57.242 [2024-12-05 14:10:39.578296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:57.242 [2024-12-05 14:10:39.578297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:57.805 14:10:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:57.805 14:10:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:38:57.805 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:57.805 14:10:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:57.805 14:10:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:57.805 14:10:40 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:57.805 14:10:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:38:57.805 14:10:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:38:57.805 14:10:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:38:57.805 14:10:40 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:38:57.805 14:10:40 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:38:57.805 14:10:40 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:38:57.805 14:10:40 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:38:57.805 14:10:40 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:38:57.805 14:10:40 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:38:57.805 14:10:40 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:38:57.805 14:10:40 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:38:57.805 14:10:40 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:38:57.805 14:10:40 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:38:57.805 14:10:40 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:38:57.805 14:10:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:38:57.805 14:10:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:38:57.805 14:10:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:38:57.805 14:10:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:57.805 14:10:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:57.805 14:10:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:57.805 ************************************ 00:38:57.805 START TEST spdk_target_abort 00:38:57.805 ************************************ 00:38:57.805 14:10:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:38:57.805 14:10:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:38:57.805 14:10:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:38:57.805 14:10:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.805 14:10:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:01.099 spdk_targetn1 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:01.099 [2024-12-05 14:10:43.204599] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:01.099 [2024-12-05 14:10:43.249109] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:01.099 14:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:04.369 Initializing NVMe Controllers 00:39:04.369 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:04.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:04.369 Initialization complete. Launching workers. 
00:39:04.369 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 17023, failed: 0 00:39:04.369 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1284, failed to submit 15739 00:39:04.369 success 755, unsuccessful 529, failed 0 00:39:04.369 14:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:04.369 14:10:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:07.648 Initializing NVMe Controllers 00:39:07.648 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:07.648 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:07.648 Initialization complete. Launching workers. 00:39:07.648 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8604, failed: 0 00:39:07.648 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1234, failed to submit 7370 00:39:07.648 success 331, unsuccessful 903, failed 0 00:39:07.648 14:10:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:07.648 14:10:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:10.929 Initializing NVMe Controllers 00:39:10.929 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:10.929 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:10.929 Initialization complete. Launching workers. 
00:39:10.929 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38772, failed: 0 00:39:10.929 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2830, failed to submit 35942 00:39:10.929 success 593, unsuccessful 2237, failed 0 00:39:10.929 14:10:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:39:10.929 14:10:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:10.929 14:10:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:10.929 14:10:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:10.929 14:10:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:39:10.929 14:10:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:10.929 14:10:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:12.299 14:10:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.299 14:10:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 939019 00:39:12.299 14:10:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 939019 ']' 00:39:12.299 14:10:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 939019 00:39:12.299 14:10:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:39:12.299 14:10:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:12.299 14:10:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 939019 00:39:12.556 14:10:54 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:12.556 14:10:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:12.556 14:10:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 939019' 00:39:12.556 killing process with pid 939019 00:39:12.556 14:10:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 939019 00:39:12.556 14:10:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 939019 00:39:12.556 00:39:12.556 real 0m14.696s 00:39:12.556 user 0m58.483s 00:39:12.556 sys 0m2.674s 00:39:12.556 14:10:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:12.556 14:10:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:12.556 ************************************ 00:39:12.556 END TEST spdk_target_abort 00:39:12.556 ************************************ 00:39:12.556 14:10:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:39:12.556 14:10:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:12.557 14:10:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:12.557 14:10:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:12.815 ************************************ 00:39:12.815 START TEST kernel_target_abort 00:39:12.815 ************************************ 00:39:12.815 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:39:12.815 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:39:12.815 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:39:12.815 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:39:12.815 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:12.815 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:12.815 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:12.815 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:12.815 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:12.815 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:12.815 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:12.815 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:12.815 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:39:12.815 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:39:12.815 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:39:12.815 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:12.815 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:12.815 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:39:12.815 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:39:12.815 14:10:55 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:39:12.815 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:39:12.815 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:39:12.815 14:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:15.344 Waiting for block devices as requested 00:39:15.344 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:39:15.603 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:39:15.603 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:39:15.603 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:39:15.862 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:39:15.862 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:39:15.862 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:39:16.121 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:39:16.121 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:39:16.121 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:39:16.121 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:39:16.380 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:39:16.380 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:39:16.380 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:39:16.639 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:39:16.639 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:39:16.639 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:39:16.898 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:39:16.898 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:39:16.898 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:39:16.898 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local 
device=nvme0n1 00:39:16.898 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:16.898 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:39:16.898 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:39:16.898 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:39:16.898 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:39:16.898 No valid GPT data, bailing 00:39:16.898 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:39:16.898 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:39:16.898 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:39:16.898 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:39:16.898 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:39:16.898 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:16.898 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:16.898 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:39:16.898 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:39:16.898 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:39:16.898 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:39:16.898 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:39:16.898 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:39:16.898 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:39:16.898 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:39:16.898 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:39:16.898 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:39:16.898 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:39:16.898 00:39:16.898 Discovery Log Number of Records 2, Generation counter 2 00:39:16.898 =====Discovery Log Entry 0====== 00:39:16.898 trtype: tcp 00:39:16.898 adrfam: ipv4 00:39:16.898 subtype: current discovery subsystem 00:39:16.898 treq: not specified, sq flow control disable supported 00:39:16.898 portid: 1 00:39:16.898 trsvcid: 4420 00:39:16.898 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:39:16.898 traddr: 10.0.0.1 00:39:16.898 eflags: none 00:39:16.898 sectype: none 00:39:16.898 =====Discovery Log Entry 1====== 00:39:16.898 trtype: tcp 00:39:16.898 adrfam: ipv4 00:39:16.898 subtype: nvme subsystem 00:39:16.898 treq: not specified, sq flow control disable supported 00:39:16.898 portid: 1 00:39:16.898 trsvcid: 4420 00:39:16.898 subnqn: nqn.2016-06.io.spdk:testnqn 00:39:16.898 traddr: 10.0.0.1 00:39:16.899 eflags: none 00:39:16.899 sectype: none 00:39:16.899 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:39:16.899 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:16.899 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:16.899 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:39:16.899 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:16.899 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:16.899 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:16.899 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:16.899 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:16.899 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:16.899 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:16.899 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:16.899 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:16.899 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:16.899 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:39:16.899 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:16.899 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:39:16.899 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:16.899 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:16.899 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:16.899 14:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:20.183 Initializing NVMe Controllers 00:39:20.183 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:20.183 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:20.183 Initialization complete. Launching workers. 
00:39:20.183 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 94873, failed: 0 00:39:20.183 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 94873, failed to submit 0 00:39:20.183 success 0, unsuccessful 94873, failed 0 00:39:20.183 14:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:20.183 14:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:23.469 Initializing NVMe Controllers 00:39:23.469 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:23.469 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:23.469 Initialization complete. Launching workers. 00:39:23.469 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 149956, failed: 0 00:39:23.469 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37862, failed to submit 112094 00:39:23.469 success 0, unsuccessful 37862, failed 0 00:39:23.469 14:11:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:23.469 14:11:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:26.749 Initializing NVMe Controllers 00:39:26.749 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:26.749 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:26.749 Initialization complete. Launching workers. 
00:39:26.749 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 142047, failed: 0 00:39:26.749 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35590, failed to submit 106457 00:39:26.749 success 0, unsuccessful 35590, failed 0 00:39:26.749 14:11:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:39:26.749 14:11:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:39:26.749 14:11:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:39:26.749 14:11:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:26.749 14:11:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:26.749 14:11:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:39:26.749 14:11:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:26.749 14:11:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:39:26.749 14:11:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:39:26.749 14:11:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:29.279 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:39:29.279 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:39:29.279 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:39:29.279 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:39:29.279 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:39:29.279 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:39:29.279 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:39:29.279 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:39:29.279 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:39:29.279 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:39:29.279 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:39:29.279 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:39:29.279 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:39:29.279 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:39:29.279 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:39:29.279 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:39:30.656 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:39:30.656 00:39:30.656 real 0m18.042s 00:39:30.656 user 0m9.102s 00:39:30.656 sys 0m5.058s 00:39:30.656 14:11:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:30.656 14:11:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:30.656 ************************************ 00:39:30.656 END TEST kernel_target_abort 00:39:30.656 ************************************ 00:39:30.656 14:11:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:39:30.656 14:11:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:39:30.656 14:11:13 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:30.656 14:11:13 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:39:30.656 14:11:13 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:30.656 14:11:13 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:39:30.656 14:11:13 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:30.656 14:11:13 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:30.656 rmmod nvme_tcp 00:39:30.916 rmmod nvme_fabrics 00:39:30.916 rmmod nvme_keyring 00:39:30.916 14:11:13 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:39:30.916 14:11:13 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:39:30.916 14:11:13 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:39:30.916 14:11:13 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 939019 ']' 00:39:30.916 14:11:13 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 939019 00:39:30.916 14:11:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 939019 ']' 00:39:30.916 14:11:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 939019 00:39:30.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (939019) - No such process 00:39:30.916 14:11:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 939019 is not found' 00:39:30.916 Process with pid 939019 is not found 00:39:30.916 14:11:13 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:39:30.916 14:11:13 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:33.451 Waiting for block devices as requested 00:39:33.451 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:39:33.709 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:39:33.709 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:39:33.968 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:39:33.968 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:39:33.968 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:39:33.968 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:39:34.227 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:39:34.227 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:39:34.227 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:39:34.487 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:39:34.487 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:39:34.487 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:39:34.487 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:39:34.747 0000:80:04.2 
(8086 2021): vfio-pci -> ioatdma 00:39:34.747 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:39:34.747 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:39:35.006 14:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:35.006 14:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:35.006 14:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:39:35.006 14:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:39:35.006 14:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:35.006 14:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:39:35.006 14:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:35.006 14:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:35.006 14:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:35.006 14:11:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:35.006 14:11:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:36.913 14:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:36.913 00:39:36.913 real 0m50.581s 00:39:36.913 user 1m12.157s 00:39:36.913 sys 0m16.404s 00:39:36.913 14:11:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:36.913 14:11:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:36.913 ************************************ 00:39:36.913 END TEST nvmf_abort_qd_sizes 00:39:36.913 ************************************ 00:39:36.913 14:11:19 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:36.913 14:11:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:36.913 14:11:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:39:36.913 14:11:19 -- common/autotest_common.sh@10 -- # set +x 00:39:37.172 ************************************ 00:39:37.172 START TEST keyring_file 00:39:37.172 ************************************ 00:39:37.172 14:11:19 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:37.172 * Looking for test storage... 00:39:37.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:37.172 14:11:19 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:37.172 14:11:19 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:39:37.172 14:11:19 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:37.172 14:11:19 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:37.172 14:11:19 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:37.172 14:11:19 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:37.172 14:11:19 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:37.172 14:11:19 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:39:37.172 14:11:19 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:39:37.172 14:11:19 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:39:37.172 14:11:19 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:39:37.172 14:11:19 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:39:37.172 14:11:19 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:39:37.172 14:11:19 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:39:37.172 14:11:19 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:37.173 14:11:19 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:39:37.173 14:11:19 keyring_file -- scripts/common.sh@345 -- # : 1 00:39:37.173 14:11:19 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:37.173 14:11:19 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:37.173 14:11:19 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:39:37.173 14:11:19 keyring_file -- scripts/common.sh@353 -- # local d=1 00:39:37.173 14:11:19 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:37.173 14:11:19 keyring_file -- scripts/common.sh@355 -- # echo 1 00:39:37.173 14:11:19 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:39:37.173 14:11:19 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:39:37.173 14:11:19 keyring_file -- scripts/common.sh@353 -- # local d=2 00:39:37.173 14:11:19 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:37.173 14:11:19 keyring_file -- scripts/common.sh@355 -- # echo 2 00:39:37.173 14:11:19 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:39:37.173 14:11:19 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:37.173 14:11:19 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:37.173 14:11:19 keyring_file -- scripts/common.sh@368 -- # return 0 00:39:37.173 14:11:19 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:37.173 14:11:19 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:37.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.173 --rc genhtml_branch_coverage=1 00:39:37.173 --rc genhtml_function_coverage=1 00:39:37.173 --rc genhtml_legend=1 00:39:37.173 --rc geninfo_all_blocks=1 00:39:37.173 --rc geninfo_unexecuted_blocks=1 00:39:37.173 00:39:37.173 ' 00:39:37.173 14:11:19 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:37.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.173 --rc genhtml_branch_coverage=1 00:39:37.173 --rc genhtml_function_coverage=1 00:39:37.173 --rc genhtml_legend=1 00:39:37.173 --rc geninfo_all_blocks=1 00:39:37.173 --rc 
geninfo_unexecuted_blocks=1 00:39:37.173 00:39:37.173 ' 00:39:37.173 14:11:19 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:37.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.173 --rc genhtml_branch_coverage=1 00:39:37.173 --rc genhtml_function_coverage=1 00:39:37.173 --rc genhtml_legend=1 00:39:37.173 --rc geninfo_all_blocks=1 00:39:37.173 --rc geninfo_unexecuted_blocks=1 00:39:37.173 00:39:37.173 ' 00:39:37.173 14:11:19 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:37.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.173 --rc genhtml_branch_coverage=1 00:39:37.173 --rc genhtml_function_coverage=1 00:39:37.173 --rc genhtml_legend=1 00:39:37.173 --rc geninfo_all_blocks=1 00:39:37.173 --rc geninfo_unexecuted_blocks=1 00:39:37.173 00:39:37.173 ' 00:39:37.173 14:11:19 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:37.173 14:11:19 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:37.173 14:11:19 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:37.173 14:11:19 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:39:37.173 14:11:19 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:37.173 14:11:19 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:37.173 14:11:19 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:37.173 14:11:19 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.173 14:11:19 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.173 14:11:19 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.173 14:11:19 keyring_file -- paths/export.sh@5 -- # export PATH 00:39:37.173 14:11:19 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@51 -- # : 0 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:39:37.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:37.173 14:11:19 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:37.173 14:11:19 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:37.173 14:11:19 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:37.173 14:11:19 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:39:37.173 14:11:19 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:39:37.173 14:11:19 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:39:37.173 14:11:19 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:37.173 14:11:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:37.173 14:11:19 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:37.173 14:11:19 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:37.173 14:11:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:37.173 14:11:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:37.173 14:11:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.eW0dNdxUek 00:39:37.173 14:11:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:37.173 14:11:19 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:37.432 14:11:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.eW0dNdxUek 00:39:37.432 14:11:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.eW0dNdxUek 00:39:37.432 14:11:19 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.eW0dNdxUek 00:39:37.432 14:11:19 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:39:37.432 14:11:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:37.432 14:11:19 keyring_file -- keyring/common.sh@17 -- # name=key1 00:39:37.432 14:11:19 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:37.432 14:11:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:37.432 14:11:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:37.432 14:11:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.tcVVirXTab 00:39:37.432 14:11:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:37.432 14:11:19 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:37.432 14:11:19 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:37.432 14:11:19 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:37.432 14:11:19 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:39:37.432 14:11:19 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:37.432 14:11:19 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:37.432 14:11:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.tcVVirXTab 00:39:37.432 14:11:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.tcVVirXTab 00:39:37.432 14:11:19 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.tcVVirXTab 
00:39:37.432 14:11:19 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:37.432 14:11:19 keyring_file -- keyring/file.sh@30 -- # tgtpid=948029 00:39:37.432 14:11:19 keyring_file -- keyring/file.sh@32 -- # waitforlisten 948029 00:39:37.432 14:11:19 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 948029 ']' 00:39:37.432 14:11:19 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:37.432 14:11:19 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:37.432 14:11:19 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:37.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:37.432 14:11:19 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:37.432 14:11:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:37.432 [2024-12-05 14:11:19.891007] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:39:37.432 [2024-12-05 14:11:19.891053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid948029 ] 00:39:37.432 [2024-12-05 14:11:19.964024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:37.432 [2024-12-05 14:11:20.006241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:37.691 14:11:20 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:37.691 14:11:20 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:37.691 14:11:20 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:39:37.691 14:11:20 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.691 14:11:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:37.691 [2024-12-05 14:11:20.232461] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:37.691 null0 00:39:37.691 [2024-12-05 14:11:20.264520] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:37.691 [2024-12-05 14:11:20.264877] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:37.949 14:11:20 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.949 14:11:20 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:37.949 14:11:20 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:37.949 14:11:20 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:37.949 14:11:20 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:39:37.949 14:11:20 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:39:37.949 14:11:20 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:39:37.949 14:11:20 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:37.949 14:11:20 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:37.950 14:11:20 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.950 14:11:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:37.950 [2024-12-05 14:11:20.292589] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:39:37.950 request: 00:39:37.950 { 00:39:37.950 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:39:37.950 "secure_channel": false, 00:39:37.950 "listen_address": { 00:39:37.950 "trtype": "tcp", 00:39:37.950 "traddr": "127.0.0.1", 00:39:37.950 "trsvcid": "4420" 00:39:37.950 }, 00:39:37.950 "method": "nvmf_subsystem_add_listener", 00:39:37.950 "req_id": 1 00:39:37.950 } 00:39:37.950 Got JSON-RPC error response 00:39:37.950 response: 00:39:37.950 { 00:39:37.950 "code": -32602, 00:39:37.950 "message": "Invalid parameters" 00:39:37.950 } 00:39:37.950 14:11:20 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:37.950 14:11:20 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:37.950 14:11:20 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:37.950 14:11:20 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:37.950 14:11:20 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:37.950 14:11:20 keyring_file -- keyring/file.sh@47 -- # bperfpid=948036 00:39:37.950 14:11:20 keyring_file -- keyring/file.sh@49 -- # waitforlisten 948036 /var/tmp/bperf.sock 00:39:37.950 14:11:20 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:39:37.950 14:11:20 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 948036 ']' 00:39:37.950 14:11:20 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:37.950 14:11:20 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:37.950 14:11:20 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:37.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:37.950 14:11:20 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:37.950 14:11:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:37.950 [2024-12-05 14:11:20.344338] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:39:37.950 [2024-12-05 14:11:20.344391] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid948036 ] 00:39:37.950 [2024-12-05 14:11:20.417669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:37.950 [2024-12-05 14:11:20.460116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:38.208 14:11:20 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:38.208 14:11:20 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:38.208 14:11:20 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.eW0dNdxUek 00:39:38.208 14:11:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.eW0dNdxUek 00:39:38.208 14:11:20 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.tcVVirXTab 00:39:38.208 14:11:20 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.tcVVirXTab 00:39:38.466 14:11:20 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:39:38.466 14:11:20 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:39:38.466 14:11:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:38.466 14:11:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:38.466 14:11:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:38.723 14:11:21 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.eW0dNdxUek == \/\t\m\p\/\t\m\p\.\e\W\0\d\N\d\x\U\e\k ]] 00:39:38.723 14:11:21 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:39:38.723 14:11:21 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:39:38.723 14:11:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:38.723 14:11:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:38.723 14:11:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:38.993 14:11:21 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.tcVVirXTab == \/\t\m\p\/\t\m\p\.\t\c\V\V\i\r\X\T\a\b ]] 00:39:38.993 14:11:21 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:39:38.993 14:11:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:38.993 14:11:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:38.993 14:11:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:38.993 14:11:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:38.993 14:11:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:39:38.993 14:11:21 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:39:38.993 14:11:21 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:39:38.993 14:11:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:38.993 14:11:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:38.993 14:11:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:38.993 14:11:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:38.993 14:11:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:39.250 14:11:21 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:39:39.250 14:11:21 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:39.250 14:11:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:39.507 [2024-12-05 14:11:21.868381] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:39.507 nvme0n1 00:39:39.507 14:11:21 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:39:39.507 14:11:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:39.507 14:11:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:39.507 14:11:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:39.507 14:11:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:39.507 14:11:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:39:39.764 14:11:22 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:39:39.764 14:11:22 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:39:39.764 14:11:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:39.764 14:11:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:39.764 14:11:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:39.764 14:11:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:39.764 14:11:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:39.764 14:11:22 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:39:39.764 14:11:22 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:40.022 Running I/O for 1 seconds... 00:39:40.954 19446.00 IOPS, 75.96 MiB/s 00:39:40.954 Latency(us) 00:39:40.954 [2024-12-05T13:11:23.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:40.954 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:39:40.954 nvme0n1 : 1.00 19492.22 76.14 0.00 0.00 6554.88 2668.25 13044.78 00:39:40.954 [2024-12-05T13:11:23.541Z] =================================================================================================================== 00:39:40.954 [2024-12-05T13:11:23.541Z] Total : 19492.22 76.14 0.00 0.00 6554.88 2668.25 13044.78 00:39:40.954 { 00:39:40.954 "results": [ 00:39:40.954 { 00:39:40.954 "job": "nvme0n1", 00:39:40.954 "core_mask": "0x2", 00:39:40.954 "workload": "randrw", 00:39:40.954 "percentage": 50, 00:39:40.954 "status": "finished", 00:39:40.954 "queue_depth": 128, 00:39:40.954 "io_size": 4096, 00:39:40.954 "runtime": 1.004247, 00:39:40.954 "iops": 19492.216556285457, 00:39:40.954 "mibps": 76.14147092299007, 
00:39:40.954 "io_failed": 0, 00:39:40.954 "io_timeout": 0, 00:39:40.954 "avg_latency_us": 6554.880588505747, 00:39:40.954 "min_latency_us": 2668.2514285714287, 00:39:40.954 "max_latency_us": 13044.784761904762 00:39:40.954 } 00:39:40.954 ], 00:39:40.954 "core_count": 1 00:39:40.954 } 00:39:40.954 14:11:23 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:40.954 14:11:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:41.211 14:11:23 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:39:41.211 14:11:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:41.211 14:11:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:41.211 14:11:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:41.211 14:11:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:41.211 14:11:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:41.468 14:11:23 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:39:41.468 14:11:23 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:39:41.468 14:11:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:41.468 14:11:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:41.468 14:11:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:41.468 14:11:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:41.468 14:11:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:41.468 14:11:24 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:39:41.468 14:11:24 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:41.468 14:11:24 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:41.468 14:11:24 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:41.468 14:11:24 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:41.468 14:11:24 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:41.468 14:11:24 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:41.468 14:11:24 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:41.468 14:11:24 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:41.469 14:11:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:41.727 [2024-12-05 14:11:24.207699] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:41.727 [2024-12-05 14:11:24.208397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacc190 (107): Transport endpoint is not connected 00:39:41.727 [2024-12-05 14:11:24.209391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacc190 (9): Bad file descriptor 00:39:41.727 [2024-12-05 14:11:24.210392] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:39:41.727 [2024-12-05 14:11:24.210401] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:41.727 [2024-12-05 14:11:24.210409] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:41.727 [2024-12-05 14:11:24.210417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:39:41.727 request: 00:39:41.727 { 00:39:41.727 "name": "nvme0", 00:39:41.727 "trtype": "tcp", 00:39:41.727 "traddr": "127.0.0.1", 00:39:41.727 "adrfam": "ipv4", 00:39:41.727 "trsvcid": "4420", 00:39:41.727 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:41.727 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:41.727 "prchk_reftag": false, 00:39:41.727 "prchk_guard": false, 00:39:41.727 "hdgst": false, 00:39:41.727 "ddgst": false, 00:39:41.727 "psk": "key1", 00:39:41.727 "allow_unrecognized_csi": false, 00:39:41.727 "method": "bdev_nvme_attach_controller", 00:39:41.727 "req_id": 1 00:39:41.727 } 00:39:41.727 Got JSON-RPC error response 00:39:41.727 response: 00:39:41.727 { 00:39:41.727 "code": -5, 00:39:41.727 "message": "Input/output error" 00:39:41.727 } 00:39:41.727 14:11:24 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:41.727 14:11:24 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:41.727 14:11:24 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:41.727 14:11:24 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:41.727 14:11:24 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:39:41.727 14:11:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:41.727 14:11:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:41.727 14:11:24 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:39:41.727 14:11:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:41.727 14:11:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:41.984 14:11:24 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:39:41.984 14:11:24 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:39:41.984 14:11:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:41.984 14:11:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:41.984 14:11:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:41.984 14:11:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:41.984 14:11:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:42.241 14:11:24 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:39:42.241 14:11:24 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:39:42.241 14:11:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:42.498 14:11:24 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:39:42.498 14:11:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:39:42.498 14:11:25 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:39:42.498 14:11:25 keyring_file -- keyring/file.sh@78 -- # jq length 00:39:42.498 14:11:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:42.755 14:11:25 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:39:42.755 14:11:25 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.eW0dNdxUek 00:39:42.755 14:11:25 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.eW0dNdxUek 00:39:42.755 14:11:25 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:42.755 14:11:25 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.eW0dNdxUek 00:39:42.755 14:11:25 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:42.755 14:11:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:42.755 14:11:25 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:42.755 14:11:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:42.755 14:11:25 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.eW0dNdxUek 00:39:42.755 14:11:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.eW0dNdxUek 00:39:43.011 [2024-12-05 14:11:25.367481] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.eW0dNdxUek': 0100660 00:39:43.011 [2024-12-05 14:11:25.367506] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:39:43.011 request: 00:39:43.011 { 00:39:43.011 "name": "key0", 00:39:43.011 "path": "/tmp/tmp.eW0dNdxUek", 00:39:43.011 "method": "keyring_file_add_key", 00:39:43.011 "req_id": 1 00:39:43.011 } 00:39:43.011 Got JSON-RPC error response 00:39:43.011 response: 00:39:43.011 { 00:39:43.011 "code": -1, 00:39:43.011 "message": "Operation not permitted" 00:39:43.011 } 00:39:43.011 14:11:25 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:43.011 14:11:25 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:43.011 14:11:25 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:43.011 14:11:25 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:43.011 14:11:25 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.eW0dNdxUek 00:39:43.011 14:11:25 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.eW0dNdxUek 00:39:43.011 14:11:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.eW0dNdxUek 00:39:43.011 14:11:25 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.eW0dNdxUek 00:39:43.011 14:11:25 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:39:43.011 14:11:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:43.011 14:11:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:43.011 14:11:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:43.012 14:11:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:43.012 14:11:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:43.268 14:11:25 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:39:43.268 14:11:25 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:43.268 14:11:25 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:43.268 14:11:25 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:43.268 14:11:25 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:43.268 14:11:25 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:43.268 14:11:25 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:43.268 14:11:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:43.268 14:11:25 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:43.268 14:11:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:43.525 [2024-12-05 14:11:25.945021] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.eW0dNdxUek': No such file or directory 00:39:43.525 [2024-12-05 14:11:25.945044] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:39:43.525 [2024-12-05 14:11:25.945059] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:39:43.525 [2024-12-05 14:11:25.945066] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:39:43.525 [2024-12-05 14:11:25.945074] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:43.525 [2024-12-05 14:11:25.945080] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:39:43.525 request: 00:39:43.525 { 00:39:43.525 "name": "nvme0", 00:39:43.525 "trtype": "tcp", 00:39:43.525 "traddr": "127.0.0.1", 00:39:43.525 "adrfam": "ipv4", 00:39:43.526 "trsvcid": "4420", 00:39:43.526 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:43.526 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:39:43.526 "prchk_reftag": false, 00:39:43.526 "prchk_guard": false, 00:39:43.526 "hdgst": false, 00:39:43.526 "ddgst": false, 00:39:43.526 "psk": "key0", 00:39:43.526 "allow_unrecognized_csi": false, 00:39:43.526 "method": "bdev_nvme_attach_controller", 00:39:43.526 "req_id": 1 00:39:43.526 } 00:39:43.526 Got JSON-RPC error response 00:39:43.526 response: 00:39:43.526 { 00:39:43.526 "code": -19, 00:39:43.526 "message": "No such device" 00:39:43.526 } 00:39:43.526 14:11:25 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:43.526 14:11:25 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:43.526 14:11:25 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:43.526 14:11:25 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:43.526 14:11:25 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:39:43.526 14:11:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:43.783 14:11:26 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:43.783 14:11:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:43.783 14:11:26 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:43.783 14:11:26 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:43.783 14:11:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:43.783 14:11:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:43.783 14:11:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.NZGDlaZ1P9 00:39:43.783 14:11:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:43.783 14:11:26 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:43.783 14:11:26 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:39:43.783 14:11:26 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:43.783 14:11:26 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:43.783 14:11:26 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:43.783 14:11:26 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:43.783 14:11:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.NZGDlaZ1P9 00:39:43.783 14:11:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.NZGDlaZ1P9 00:39:43.783 14:11:26 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.NZGDlaZ1P9 00:39:43.783 14:11:26 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NZGDlaZ1P9 00:39:43.783 14:11:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NZGDlaZ1P9 00:39:44.042 14:11:26 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:44.042 14:11:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:44.299 nvme0n1 00:39:44.299 14:11:26 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:39:44.299 14:11:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:44.299 14:11:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:44.299 14:11:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:44.299 14:11:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:44.299 14:11:26 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:44.299 14:11:26 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:39:44.299 14:11:26 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:39:44.299 14:11:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:44.557 14:11:27 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:39:44.557 14:11:27 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:39:44.557 14:11:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:44.557 14:11:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:44.557 14:11:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:44.814 14:11:27 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:39:44.814 14:11:27 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:39:44.814 14:11:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:44.814 14:11:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:44.814 14:11:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:44.814 14:11:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:44.814 14:11:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:45.071 14:11:27 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:39:45.072 14:11:27 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:45.072 14:11:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:39:45.072 14:11:27 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:39:45.072 14:11:27 keyring_file -- keyring/file.sh@105 -- # jq length 00:39:45.072 14:11:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:45.329 14:11:27 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:39:45.329 14:11:27 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NZGDlaZ1P9 00:39:45.329 14:11:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NZGDlaZ1P9 00:39:45.586 14:11:28 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.tcVVirXTab 00:39:45.586 14:11:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.tcVVirXTab 00:39:45.844 14:11:28 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:45.844 14:11:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:46.101 nvme0n1 00:39:46.101 14:11:28 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:39:46.101 14:11:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:39:46.360 14:11:28 keyring_file -- keyring/file.sh@113 -- # config='{ 00:39:46.360 "subsystems": [ 00:39:46.360 { 00:39:46.360 "subsystem": 
"keyring", 00:39:46.360 "config": [ 00:39:46.360 { 00:39:46.360 "method": "keyring_file_add_key", 00:39:46.360 "params": { 00:39:46.360 "name": "key0", 00:39:46.360 "path": "/tmp/tmp.NZGDlaZ1P9" 00:39:46.360 } 00:39:46.360 }, 00:39:46.360 { 00:39:46.360 "method": "keyring_file_add_key", 00:39:46.360 "params": { 00:39:46.360 "name": "key1", 00:39:46.360 "path": "/tmp/tmp.tcVVirXTab" 00:39:46.360 } 00:39:46.360 } 00:39:46.360 ] 00:39:46.360 }, 00:39:46.360 { 00:39:46.360 "subsystem": "iobuf", 00:39:46.360 "config": [ 00:39:46.360 { 00:39:46.360 "method": "iobuf_set_options", 00:39:46.360 "params": { 00:39:46.360 "small_pool_count": 8192, 00:39:46.360 "large_pool_count": 1024, 00:39:46.360 "small_bufsize": 8192, 00:39:46.360 "large_bufsize": 135168, 00:39:46.360 "enable_numa": false 00:39:46.360 } 00:39:46.360 } 00:39:46.360 ] 00:39:46.360 }, 00:39:46.360 { 00:39:46.360 "subsystem": "sock", 00:39:46.360 "config": [ 00:39:46.360 { 00:39:46.360 "method": "sock_set_default_impl", 00:39:46.360 "params": { 00:39:46.360 "impl_name": "posix" 00:39:46.360 } 00:39:46.360 }, 00:39:46.360 { 00:39:46.360 "method": "sock_impl_set_options", 00:39:46.360 "params": { 00:39:46.360 "impl_name": "ssl", 00:39:46.360 "recv_buf_size": 4096, 00:39:46.360 "send_buf_size": 4096, 00:39:46.360 "enable_recv_pipe": true, 00:39:46.360 "enable_quickack": false, 00:39:46.360 "enable_placement_id": 0, 00:39:46.360 "enable_zerocopy_send_server": true, 00:39:46.360 "enable_zerocopy_send_client": false, 00:39:46.360 "zerocopy_threshold": 0, 00:39:46.360 "tls_version": 0, 00:39:46.360 "enable_ktls": false 00:39:46.360 } 00:39:46.360 }, 00:39:46.360 { 00:39:46.360 "method": "sock_impl_set_options", 00:39:46.360 "params": { 00:39:46.360 "impl_name": "posix", 00:39:46.360 "recv_buf_size": 2097152, 00:39:46.360 "send_buf_size": 2097152, 00:39:46.360 "enable_recv_pipe": true, 00:39:46.360 "enable_quickack": false, 00:39:46.360 "enable_placement_id": 0, 00:39:46.360 "enable_zerocopy_send_server": true, 
00:39:46.360 "enable_zerocopy_send_client": false, 00:39:46.360 "zerocopy_threshold": 0, 00:39:46.360 "tls_version": 0, 00:39:46.360 "enable_ktls": false 00:39:46.360 } 00:39:46.360 } 00:39:46.360 ] 00:39:46.360 }, 00:39:46.360 { 00:39:46.360 "subsystem": "vmd", 00:39:46.360 "config": [] 00:39:46.360 }, 00:39:46.360 { 00:39:46.360 "subsystem": "accel", 00:39:46.360 "config": [ 00:39:46.360 { 00:39:46.360 "method": "accel_set_options", 00:39:46.360 "params": { 00:39:46.360 "small_cache_size": 128, 00:39:46.360 "large_cache_size": 16, 00:39:46.360 "task_count": 2048, 00:39:46.360 "sequence_count": 2048, 00:39:46.360 "buf_count": 2048 00:39:46.360 } 00:39:46.360 } 00:39:46.360 ] 00:39:46.360 }, 00:39:46.360 { 00:39:46.360 "subsystem": "bdev", 00:39:46.360 "config": [ 00:39:46.360 { 00:39:46.360 "method": "bdev_set_options", 00:39:46.360 "params": { 00:39:46.360 "bdev_io_pool_size": 65535, 00:39:46.360 "bdev_io_cache_size": 256, 00:39:46.360 "bdev_auto_examine": true, 00:39:46.360 "iobuf_small_cache_size": 128, 00:39:46.360 "iobuf_large_cache_size": 16 00:39:46.360 } 00:39:46.360 }, 00:39:46.360 { 00:39:46.360 "method": "bdev_raid_set_options", 00:39:46.360 "params": { 00:39:46.360 "process_window_size_kb": 1024, 00:39:46.360 "process_max_bandwidth_mb_sec": 0 00:39:46.360 } 00:39:46.360 }, 00:39:46.360 { 00:39:46.360 "method": "bdev_iscsi_set_options", 00:39:46.360 "params": { 00:39:46.360 "timeout_sec": 30 00:39:46.360 } 00:39:46.360 }, 00:39:46.360 { 00:39:46.360 "method": "bdev_nvme_set_options", 00:39:46.360 "params": { 00:39:46.360 "action_on_timeout": "none", 00:39:46.360 "timeout_us": 0, 00:39:46.360 "timeout_admin_us": 0, 00:39:46.360 "keep_alive_timeout_ms": 10000, 00:39:46.360 "arbitration_burst": 0, 00:39:46.360 "low_priority_weight": 0, 00:39:46.360 "medium_priority_weight": 0, 00:39:46.360 "high_priority_weight": 0, 00:39:46.361 "nvme_adminq_poll_period_us": 10000, 00:39:46.361 "nvme_ioq_poll_period_us": 0, 00:39:46.361 "io_queue_requests": 512, 
00:39:46.361 "delay_cmd_submit": true, 00:39:46.361 "transport_retry_count": 4, 00:39:46.361 "bdev_retry_count": 3, 00:39:46.361 "transport_ack_timeout": 0, 00:39:46.361 "ctrlr_loss_timeout_sec": 0, 00:39:46.361 "reconnect_delay_sec": 0, 00:39:46.361 "fast_io_fail_timeout_sec": 0, 00:39:46.361 "disable_auto_failback": false, 00:39:46.361 "generate_uuids": false, 00:39:46.361 "transport_tos": 0, 00:39:46.361 "nvme_error_stat": false, 00:39:46.361 "rdma_srq_size": 0, 00:39:46.361 "io_path_stat": false, 00:39:46.361 "allow_accel_sequence": false, 00:39:46.361 "rdma_max_cq_size": 0, 00:39:46.361 "rdma_cm_event_timeout_ms": 0, 00:39:46.361 "dhchap_digests": [ 00:39:46.361 "sha256", 00:39:46.361 "sha384", 00:39:46.361 "sha512" 00:39:46.361 ], 00:39:46.361 "dhchap_dhgroups": [ 00:39:46.361 "null", 00:39:46.361 "ffdhe2048", 00:39:46.361 "ffdhe3072", 00:39:46.361 "ffdhe4096", 00:39:46.361 "ffdhe6144", 00:39:46.361 "ffdhe8192" 00:39:46.361 ] 00:39:46.361 } 00:39:46.361 }, 00:39:46.361 { 00:39:46.361 "method": "bdev_nvme_attach_controller", 00:39:46.361 "params": { 00:39:46.361 "name": "nvme0", 00:39:46.361 "trtype": "TCP", 00:39:46.361 "adrfam": "IPv4", 00:39:46.361 "traddr": "127.0.0.1", 00:39:46.361 "trsvcid": "4420", 00:39:46.361 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:46.361 "prchk_reftag": false, 00:39:46.361 "prchk_guard": false, 00:39:46.361 "ctrlr_loss_timeout_sec": 0, 00:39:46.361 "reconnect_delay_sec": 0, 00:39:46.361 "fast_io_fail_timeout_sec": 0, 00:39:46.361 "psk": "key0", 00:39:46.361 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:46.361 "hdgst": false, 00:39:46.361 "ddgst": false, 00:39:46.361 "multipath": "multipath" 00:39:46.361 } 00:39:46.361 }, 00:39:46.361 { 00:39:46.361 "method": "bdev_nvme_set_hotplug", 00:39:46.361 "params": { 00:39:46.361 "period_us": 100000, 00:39:46.361 "enable": false 00:39:46.361 } 00:39:46.361 }, 00:39:46.361 { 00:39:46.361 "method": "bdev_wait_for_examine" 00:39:46.361 } 00:39:46.361 ] 00:39:46.361 }, 00:39:46.361 { 
00:39:46.361 "subsystem": "nbd", 00:39:46.361 "config": [] 00:39:46.361 } 00:39:46.361 ] 00:39:46.361 }' 00:39:46.361 14:11:28 keyring_file -- keyring/file.sh@115 -- # killprocess 948036 00:39:46.361 14:11:28 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 948036 ']' 00:39:46.361 14:11:28 keyring_file -- common/autotest_common.sh@958 -- # kill -0 948036 00:39:46.361 14:11:28 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:46.361 14:11:28 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:46.361 14:11:28 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 948036 00:39:46.361 14:11:28 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:46.361 14:11:28 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:46.361 14:11:28 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 948036' 00:39:46.361 killing process with pid 948036 00:39:46.361 14:11:28 keyring_file -- common/autotest_common.sh@973 -- # kill 948036 00:39:46.361 Received shutdown signal, test time was about 1.000000 seconds 00:39:46.361 00:39:46.361 Latency(us) 00:39:46.361 [2024-12-05T13:11:28.948Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:46.361 [2024-12-05T13:11:28.948Z] =================================================================================================================== 00:39:46.361 [2024-12-05T13:11:28.948Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:46.361 14:11:28 keyring_file -- common/autotest_common.sh@978 -- # wait 948036 00:39:46.361 14:11:28 keyring_file -- keyring/file.sh@118 -- # bperfpid=949555 00:39:46.361 14:11:28 keyring_file -- keyring/file.sh@120 -- # waitforlisten 949555 /var/tmp/bperf.sock 00:39:46.361 14:11:28 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 949555 ']' 00:39:46.361 14:11:28 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:39:46.361 14:11:28 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:39:46.361 14:11:28 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:46.361 14:11:28 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:46.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:46.361 14:11:28 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:39:46.361 "subsystems": [ 00:39:46.361 { 00:39:46.361 "subsystem": "keyring", 00:39:46.361 "config": [ 00:39:46.361 { 00:39:46.361 "method": "keyring_file_add_key", 00:39:46.361 "params": { 00:39:46.361 "name": "key0", 00:39:46.361 "path": "/tmp/tmp.NZGDlaZ1P9" 00:39:46.361 } 00:39:46.361 }, 00:39:46.361 { 00:39:46.361 "method": "keyring_file_add_key", 00:39:46.361 "params": { 00:39:46.361 "name": "key1", 00:39:46.361 "path": "/tmp/tmp.tcVVirXTab" 00:39:46.361 } 00:39:46.361 } 00:39:46.361 ] 00:39:46.361 }, 00:39:46.361 { 00:39:46.361 "subsystem": "iobuf", 00:39:46.361 "config": [ 00:39:46.361 { 00:39:46.361 "method": "iobuf_set_options", 00:39:46.361 "params": { 00:39:46.361 "small_pool_count": 8192, 00:39:46.361 "large_pool_count": 1024, 00:39:46.361 "small_bufsize": 8192, 00:39:46.361 "large_bufsize": 135168, 00:39:46.361 "enable_numa": false 00:39:46.361 } 00:39:46.361 } 00:39:46.361 ] 00:39:46.361 }, 00:39:46.361 { 00:39:46.361 "subsystem": "sock", 00:39:46.361 "config": [ 00:39:46.361 { 00:39:46.361 "method": "sock_set_default_impl", 00:39:46.361 "params": { 00:39:46.361 "impl_name": "posix" 00:39:46.361 } 00:39:46.361 }, 00:39:46.361 { 00:39:46.361 "method": "sock_impl_set_options", 00:39:46.361 "params": { 00:39:46.361 "impl_name": "ssl", 00:39:46.361 "recv_buf_size": 4096, 00:39:46.361 
"send_buf_size": 4096, 00:39:46.361 "enable_recv_pipe": true, 00:39:46.361 "enable_quickack": false, 00:39:46.361 "enable_placement_id": 0, 00:39:46.361 "enable_zerocopy_send_server": true, 00:39:46.361 "enable_zerocopy_send_client": false, 00:39:46.361 "zerocopy_threshold": 0, 00:39:46.361 "tls_version": 0, 00:39:46.361 "enable_ktls": false 00:39:46.361 } 00:39:46.361 }, 00:39:46.361 { 00:39:46.361 "method": "sock_impl_set_options", 00:39:46.361 "params": { 00:39:46.361 "impl_name": "posix", 00:39:46.361 "recv_buf_size": 2097152, 00:39:46.361 "send_buf_size": 2097152, 00:39:46.361 "enable_recv_pipe": true, 00:39:46.361 "enable_quickack": false, 00:39:46.361 "enable_placement_id": 0, 00:39:46.361 "enable_zerocopy_send_server": true, 00:39:46.361 "enable_zerocopy_send_client": false, 00:39:46.361 "zerocopy_threshold": 0, 00:39:46.361 "tls_version": 0, 00:39:46.361 "enable_ktls": false 00:39:46.361 } 00:39:46.361 } 00:39:46.361 ] 00:39:46.361 }, 00:39:46.361 { 00:39:46.361 "subsystem": "vmd", 00:39:46.361 "config": [] 00:39:46.361 }, 00:39:46.361 { 00:39:46.361 "subsystem": "accel", 00:39:46.361 "config": [ 00:39:46.361 { 00:39:46.361 "method": "accel_set_options", 00:39:46.361 "params": { 00:39:46.361 "small_cache_size": 128, 00:39:46.361 "large_cache_size": 16, 00:39:46.361 "task_count": 2048, 00:39:46.361 "sequence_count": 2048, 00:39:46.361 "buf_count": 2048 00:39:46.361 } 00:39:46.361 } 00:39:46.361 ] 00:39:46.361 }, 00:39:46.361 { 00:39:46.361 "subsystem": "bdev", 00:39:46.361 "config": [ 00:39:46.361 { 00:39:46.361 "method": "bdev_set_options", 00:39:46.361 "params": { 00:39:46.361 "bdev_io_pool_size": 65535, 00:39:46.361 "bdev_io_cache_size": 256, 00:39:46.361 "bdev_auto_examine": true, 00:39:46.361 "iobuf_small_cache_size": 128, 00:39:46.361 "iobuf_large_cache_size": 16 00:39:46.361 } 00:39:46.361 }, 00:39:46.361 { 00:39:46.361 "method": "bdev_raid_set_options", 00:39:46.361 "params": { 00:39:46.361 "process_window_size_kb": 1024, 00:39:46.361 
"process_max_bandwidth_mb_sec": 0 00:39:46.361 } 00:39:46.361 }, 00:39:46.362 { 00:39:46.362 "method": "bdev_iscsi_set_options", 00:39:46.362 "params": { 00:39:46.362 "timeout_sec": 30 00:39:46.362 } 00:39:46.362 }, 00:39:46.362 { 00:39:46.362 "method": "bdev_nvme_set_options", 00:39:46.362 "params": { 00:39:46.362 "action_on_timeout": "none", 00:39:46.362 "timeout_us": 0, 00:39:46.362 "timeout_admin_us": 0, 00:39:46.362 "keep_alive_timeout_ms": 10000, 00:39:46.362 "arbitration_burst": 0, 00:39:46.362 "low_priority_weight": 0, 00:39:46.362 "medium_priority_weight": 0, 00:39:46.362 "high_priority_weight": 0, 00:39:46.362 "nvme_adminq_poll_period_us": 10000, 00:39:46.362 "nvme_ioq_poll_period_us": 0, 00:39:46.362 "io_queue_requests": 512, 00:39:46.362 "delay_cmd_submit": true, 00:39:46.362 "transport_retry_count": 4, 00:39:46.362 "bdev_retry_count": 3, 00:39:46.362 "transport_ack_timeout": 0, 00:39:46.362 "ctrlr_loss_timeout_sec": 0, 00:39:46.362 "reconnect_delay_sec": 0, 00:39:46.362 "fast_io_fail_timeout_sec": 0, 00:39:46.362 "disable_auto_failback": false, 00:39:46.362 "generate_uuids": false, 00:39:46.362 "transport_tos": 0, 00:39:46.362 "nvme_error_stat": false, 00:39:46.362 "rdma_srq_size": 0, 00:39:46.362 "io_path_stat": false, 00:39:46.362 "allow_accel_sequence": false, 00:39:46.362 "rdma_max_cq_size": 0, 00:39:46.362 "rdma_cm_event_timeout_ms": 0, 00:39:46.362 "dhchap_digests": [ 00:39:46.362 "sha256", 00:39:46.362 "sha384", 00:39:46.362 "sha512" 00:39:46.362 ], 00:39:46.362 "dhchap_dhgroups": [ 00:39:46.362 "null", 00:39:46.362 "ffdhe2048", 00:39:46.362 "ffdhe3072", 00:39:46.362 "ffdhe4096", 00:39:46.362 "ffdhe6144", 00:39:46.362 "ffdhe8192" 00:39:46.362 ] 00:39:46.362 } 00:39:46.362 }, 00:39:46.362 { 00:39:46.362 "method": "bdev_nvme_attach_controller", 00:39:46.362 "params": { 00:39:46.362 "name": "nvme0", 00:39:46.362 "trtype": "TCP", 00:39:46.362 "adrfam": "IPv4", 00:39:46.362 "traddr": "127.0.0.1", 00:39:46.362 "trsvcid": "4420", 00:39:46.362 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:39:46.362 "prchk_reftag": false, 00:39:46.362 "prchk_guard": false, 00:39:46.362 "ctrlr_loss_timeout_sec": 0, 00:39:46.362 "reconnect_delay_sec": 0, 00:39:46.362 "fast_io_fail_timeout_sec": 0, 00:39:46.362 "psk": "key0", 00:39:46.362 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:46.362 "hdgst": false, 00:39:46.362 "ddgst": false, 00:39:46.362 "multipath": "multipath" 00:39:46.362 } 00:39:46.362 }, 00:39:46.362 { 00:39:46.362 "method": "bdev_nvme_set_hotplug", 00:39:46.362 "params": { 00:39:46.362 "period_us": 100000, 00:39:46.362 "enable": false 00:39:46.362 } 00:39:46.362 }, 00:39:46.362 { 00:39:46.362 "method": "bdev_wait_for_examine" 00:39:46.362 } 00:39:46.362 ] 00:39:46.362 }, 00:39:46.362 { 00:39:46.362 "subsystem": "nbd", 00:39:46.362 "config": [] 00:39:46.362 } 00:39:46.362 ] 00:39:46.362 }' 00:39:46.362 14:11:28 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:46.362 14:11:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:46.621 [2024-12-05 14:11:28.972652] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:39:46.621 [2024-12-05 14:11:28.972701] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid949555 ] 00:39:46.621 [2024-12-05 14:11:29.046775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:46.621 [2024-12-05 14:11:29.086456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:46.879 [2024-12-05 14:11:29.248482] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:47.445 14:11:29 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:47.445 14:11:29 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:47.445 14:11:29 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:39:47.445 14:11:29 keyring_file -- keyring/file.sh@121 -- # jq length 00:39:47.445 14:11:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:47.445 14:11:29 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:39:47.445 14:11:29 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:39:47.445 14:11:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:47.445 14:11:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:47.445 14:11:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:47.445 14:11:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:47.445 14:11:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:47.703 14:11:30 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:39:47.703 14:11:30 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:39:47.703 14:11:30 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:47.703 14:11:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:47.703 14:11:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:47.703 14:11:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:47.703 14:11:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:47.962 14:11:30 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:39:47.962 14:11:30 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:39:47.962 14:11:30 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:39:47.962 14:11:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:39:48.220 14:11:30 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:39:48.220 14:11:30 keyring_file -- keyring/file.sh@1 -- # cleanup 00:39:48.220 14:11:30 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.NZGDlaZ1P9 /tmp/tmp.tcVVirXTab 00:39:48.220 14:11:30 keyring_file -- keyring/file.sh@20 -- # killprocess 949555 00:39:48.220 14:11:30 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 949555 ']' 00:39:48.220 14:11:30 keyring_file -- common/autotest_common.sh@958 -- # kill -0 949555 00:39:48.220 14:11:30 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:48.220 14:11:30 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:48.220 14:11:30 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 949555 00:39:48.220 14:11:30 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:48.220 14:11:30 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:48.220 14:11:30 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 949555' 00:39:48.220 killing process with pid 949555 00:39:48.220 14:11:30 keyring_file -- common/autotest_common.sh@973 -- # kill 949555 00:39:48.220 Received shutdown signal, test time was about 1.000000 seconds 00:39:48.220 00:39:48.220 Latency(us) 00:39:48.220 [2024-12-05T13:11:30.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:48.220 [2024-12-05T13:11:30.807Z] =================================================================================================================== 00:39:48.220 [2024-12-05T13:11:30.807Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:48.220 14:11:30 keyring_file -- common/autotest_common.sh@978 -- # wait 949555 00:39:48.220 14:11:30 keyring_file -- keyring/file.sh@21 -- # killprocess 948029 00:39:48.220 14:11:30 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 948029 ']' 00:39:48.220 14:11:30 keyring_file -- common/autotest_common.sh@958 -- # kill -0 948029 00:39:48.220 14:11:30 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:48.220 14:11:30 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:48.220 14:11:30 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 948029 00:39:48.479 14:11:30 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:48.479 14:11:30 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:48.479 14:11:30 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 948029' 00:39:48.479 killing process with pid 948029 00:39:48.479 14:11:30 keyring_file -- common/autotest_common.sh@973 -- # kill 948029 00:39:48.479 14:11:30 keyring_file -- common/autotest_common.sh@978 -- # wait 948029 00:39:48.738 00:39:48.738 real 0m11.626s 00:39:48.738 user 0m28.779s 00:39:48.738 sys 0m2.686s 00:39:48.738 14:11:31 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:48.738 14:11:31 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:48.738 ************************************ 00:39:48.738 END TEST keyring_file 00:39:48.738 ************************************ 00:39:48.738 14:11:31 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:39:48.738 14:11:31 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:48.738 14:11:31 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:48.738 14:11:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:48.738 14:11:31 -- common/autotest_common.sh@10 -- # set +x 00:39:48.738 ************************************ 00:39:48.739 START TEST keyring_linux 00:39:48.739 ************************************ 00:39:48.739 14:11:31 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:48.739 Joined session keyring: 978508991 00:39:48.739 * Looking for test storage... 
00:39:48.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:48.739 14:11:31 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:48.739 14:11:31 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:39:48.739 14:11:31 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:48.998 14:11:31 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:48.998 14:11:31 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:48.998 14:11:31 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:48.998 14:11:31 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:48.998 14:11:31 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:39:48.998 14:11:31 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:39:48.998 14:11:31 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:39:48.998 14:11:31 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:39:48.998 14:11:31 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:39:48.998 14:11:31 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:39:48.998 14:11:31 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:39:48.998 14:11:31 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:48.998 14:11:31 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:39:48.998 14:11:31 keyring_linux -- scripts/common.sh@345 -- # : 1 00:39:48.998 14:11:31 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:48.998 14:11:31 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:48.998 14:11:31 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:39:48.998 14:11:31 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:39:48.998 14:11:31 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:48.998 14:11:31 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:39:48.998 14:11:31 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:39:48.998 14:11:31 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:39:48.998 14:11:31 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:39:48.998 14:11:31 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:48.998 14:11:31 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:39:48.998 14:11:31 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:39:48.998 14:11:31 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:48.998 14:11:31 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:48.998 14:11:31 keyring_linux -- scripts/common.sh@368 -- # return 0 00:39:48.998 14:11:31 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:48.998 14:11:31 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:48.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:48.998 --rc genhtml_branch_coverage=1 00:39:48.998 --rc genhtml_function_coverage=1 00:39:48.998 --rc genhtml_legend=1 00:39:48.998 --rc geninfo_all_blocks=1 00:39:48.998 --rc geninfo_unexecuted_blocks=1 00:39:48.998 00:39:48.998 ' 00:39:48.998 14:11:31 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:48.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:48.998 --rc genhtml_branch_coverage=1 00:39:48.998 --rc genhtml_function_coverage=1 00:39:48.998 --rc genhtml_legend=1 00:39:48.998 --rc geninfo_all_blocks=1 00:39:48.998 --rc geninfo_unexecuted_blocks=1 00:39:48.998 00:39:48.998 ' 
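The `cmp_versions 1.15 '<' 2` trace above is `scripts/common.sh` deciding that the installed lcov (1.15) predates version 2 before exporting the `LCOV_OPTS` branch-coverage flags. A minimal Python sketch of that field-by-field comparison follows; the function name and the zero-padding of the shorter version are assumptions read off the trace (`IFS=.-:`, `read -ra ver1/ver2`, numeric compare per field), not a verbatim port:

```python
import re

def cmp_versions(v1: str, op: str, v2: str) -> bool:
    # Split on the same separators the shell uses (IFS=.-:) and compare
    # numerically, padding the shorter version with zeros.
    a = [int(x) for x in re.split(r"[.:-]", v1)]
    b = [int(x) for x in re.split(r"[.:-]", v2)]
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    for x, y in zip(a, b):
        if x != y:
            return (x < y) if op == "<" else (x > y)
    return False  # equal versions: neither strictly < nor >

print(cmp_versions("1.15", "<", "2"))  # → True
```

With the inputs from the trace, the first fields already differ (`ver1[v]=1`, `ver2[v]=2` above), so the loop decides on its first iteration and the lcov-1.x option set is selected.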
00:39:48.998 14:11:31 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:48.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:48.998 --rc genhtml_branch_coverage=1 00:39:48.998 --rc genhtml_function_coverage=1 00:39:48.998 --rc genhtml_legend=1 00:39:48.998 --rc geninfo_all_blocks=1 00:39:48.998 --rc geninfo_unexecuted_blocks=1 00:39:48.998 00:39:48.998 ' 00:39:48.998 14:11:31 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:48.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:48.998 --rc genhtml_branch_coverage=1 00:39:48.998 --rc genhtml_function_coverage=1 00:39:48.998 --rc genhtml_legend=1 00:39:48.998 --rc geninfo_all_blocks=1 00:39:48.998 --rc geninfo_unexecuted_blocks=1 00:39:48.998 00:39:48.998 ' 00:39:48.998 14:11:31 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:48.998 14:11:31 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:48.998 14:11:31 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:39:48.998 14:11:31 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:48.998 14:11:31 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:48.998 14:11:31 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:48.998 14:11:31 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:48.998 14:11:31 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:48.998 14:11:31 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:48.998 14:11:31 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:48.998 14:11:31 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:48.998 14:11:31 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:48.998 14:11:31 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:39:48.998 14:11:31 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:39:48.998 14:11:31 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:39:48.999 14:11:31 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:48.999 14:11:31 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:48.999 14:11:31 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:48.999 14:11:31 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:48.999 14:11:31 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:48.999 14:11:31 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:39:48.999 14:11:31 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:48.999 14:11:31 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:48.999 14:11:31 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:48.999 14:11:31 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:48.999 14:11:31 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:48.999 14:11:31 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:48.999 14:11:31 keyring_linux -- paths/export.sh@5 -- # export PATH 00:39:48.999 14:11:31 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:48.999 14:11:31 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:39:48.999 14:11:31 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:48.999 14:11:31 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:48.999 14:11:31 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:48.999 14:11:31 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:48.999 14:11:31 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:48.999 14:11:31 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:39:48.999 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:48.999 14:11:31 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:48.999 14:11:31 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:48.999 14:11:31 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:48.999 14:11:31 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:48.999 14:11:31 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:48.999 14:11:31 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:48.999 14:11:31 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:39:48.999 14:11:31 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:39:48.999 14:11:31 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:39:48.999 14:11:31 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:39:48.999 14:11:31 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:48.999 14:11:31 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:39:48.999 14:11:31 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:48.999 14:11:31 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:48.999 14:11:31 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:39:48.999 14:11:31 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:48.999 14:11:31 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:48.999 14:11:31 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:39:48.999 14:11:31 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:48.999 14:11:31 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:39:48.999 14:11:31 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:39:48.999 14:11:31 keyring_linux -- nvmf/common.sh@733 -- # python - 00:39:48.999 14:11:31 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:39:48.999 14:11:31 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:39:48.999 /tmp/:spdk-test:key0 00:39:48.999 14:11:31 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:39:48.999 14:11:31 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:48.999 14:11:31 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:39:48.999 14:11:31 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:48.999 14:11:31 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:48.999 14:11:31 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:39:48.999 14:11:31 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:48.999 14:11:31 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:48.999 14:11:31 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:39:48.999 14:11:31 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:48.999 14:11:31 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:39:48.999 14:11:31 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:39:48.999 14:11:31 keyring_linux -- nvmf/common.sh@733 -- # python - 00:39:48.999 14:11:31 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:39:48.999 14:11:31 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:39:48.999 /tmp/:spdk-test:key1 00:39:48.999 14:11:31 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=950104 00:39:48.999 14:11:31 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:48.999 14:11:31 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 950104 00:39:48.999 14:11:31 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 950104 ']' 00:39:48.999 14:11:31 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:48.999 14:11:31 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:48.999 14:11:31 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:48.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:48.999 14:11:31 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:48.999 14:11:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:48.999 [2024-12-05 14:11:31.555642] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:39:48.999 [2024-12-05 14:11:31.555691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid950104 ] 00:39:49.257 [2024-12-05 14:11:31.628092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:49.257 [2024-12-05 14:11:31.667064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:49.515 14:11:31 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:49.515 14:11:31 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:39:49.515 14:11:31 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:39:49.515 14:11:31 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.515 14:11:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:49.515 [2024-12-05 14:11:31.898149] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:49.515 null0 00:39:49.515 [2024-12-05 14:11:31.930200] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:49.515 [2024-12-05 14:11:31.930578] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:49.515 14:11:31 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.515 14:11:31 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:39:49.515 359685629 00:39:49.515 14:11:31 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:39:49.515 340043505 00:39:49.515 14:11:31 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=950118 00:39:49.515 14:11:31 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 950118 /var/tmp/bperf.sock 00:39:49.515 14:11:31 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:39:49.515 14:11:31 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 950118 ']' 00:39:49.515 14:11:31 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:49.515 14:11:31 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:49.515 14:11:31 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:49.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:49.515 14:11:31 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:49.515 14:11:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:49.515 [2024-12-05 14:11:32.003538] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:39:49.515 [2024-12-05 14:11:32.003582] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid950118 ] 00:39:49.515 [2024-12-05 14:11:32.074487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:49.772 [2024-12-05 14:11:32.117056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:49.772 14:11:32 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:49.772 14:11:32 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:39:49.772 14:11:32 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:39:49.772 14:11:32 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:39:49.772 14:11:32 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:39:49.772 14:11:32 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:50.030 14:11:32 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:50.030 14:11:32 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:50.288 [2024-12-05 14:11:32.765544] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:50.288 nvme0n1 00:39:50.288 14:11:32 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:39:50.288 14:11:32 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:39:50.288 14:11:32 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:50.288 14:11:32 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:50.288 14:11:32 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:50.288 14:11:32 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:50.546 14:11:33 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:39:50.546 14:11:33 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:50.546 14:11:33 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:39:50.546 14:11:33 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:39:50.546 14:11:33 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:50.546 14:11:33 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:39:50.546 14:11:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:50.830 14:11:33 keyring_linux -- keyring/linux.sh@25 -- # sn=359685629 00:39:50.830 14:11:33 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:39:50.830 14:11:33 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:50.830 14:11:33 keyring_linux -- keyring/linux.sh@26 -- # [[ 359685629 == \3\5\9\6\8\5\6\2\9 ]] 00:39:50.830 14:11:33 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 359685629 00:39:50.830 14:11:33 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:39:50.830 14:11:33 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:50.830 Running I/O for 1 seconds... 00:39:51.761 21810.00 IOPS, 85.20 MiB/s 00:39:51.761 Latency(us) 00:39:51.761 [2024-12-05T13:11:34.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:51.761 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:39:51.761 nvme0n1 : 1.01 21811.73 85.20 0.00 0.00 5849.52 3370.42 8426.06 00:39:51.761 [2024-12-05T13:11:34.348Z] =================================================================================================================== 00:39:51.761 [2024-12-05T13:11:34.348Z] Total : 21811.73 85.20 0.00 0.00 5849.52 3370.42 8426.06 00:39:51.761 { 00:39:51.761 "results": [ 00:39:51.761 { 00:39:51.761 "job": "nvme0n1", 00:39:51.761 "core_mask": "0x2", 00:39:51.761 "workload": "randread", 00:39:51.761 "status": "finished", 00:39:51.761 "queue_depth": 128, 00:39:51.761 "io_size": 4096, 00:39:51.761 "runtime": 1.005789, 00:39:51.761 "iops": 21811.73188412281, 00:39:51.761 "mibps": 85.20207767235473, 00:39:51.761 "io_failed": 0, 00:39:51.761 "io_timeout": 0, 00:39:51.761 "avg_latency_us": 5849.521304802713, 00:39:51.761 "min_latency_us": 3370.422857142857, 00:39:51.761 "max_latency_us": 8426.057142857142 00:39:51.761 } 00:39:51.761 ], 00:39:51.761 "core_count": 1 00:39:51.761 } 00:39:52.019 14:11:34 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:52.019 14:11:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:52.019 14:11:34 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:39:52.019 14:11:34 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:39:52.019 14:11:34 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:52.019 14:11:34 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:52.019 14:11:34 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:52.019 14:11:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:52.277 14:11:34 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:39:52.277 14:11:34 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:52.277 14:11:34 keyring_linux -- keyring/linux.sh@23 -- # return 00:39:52.277 14:11:34 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:52.277 14:11:34 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:39:52.277 14:11:34 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:52.277 14:11:34 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:52.277 14:11:34 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:52.277 14:11:34 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:52.277 14:11:34 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:52.277 14:11:34 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:52.277 14:11:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:52.535 [2024-12-05 14:11:34.949208] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:52.535 [2024-12-05 14:11:34.949830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf6fa0 (107): Transport endpoint is not connected 00:39:52.535 [2024-12-05 14:11:34.950824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf6fa0 (9): Bad file descriptor 00:39:52.535 [2024-12-05 14:11:34.951826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:39:52.535 [2024-12-05 14:11:34.951835] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:52.535 [2024-12-05 14:11:34.951843] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:52.535 [2024-12-05 14:11:34.951850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:39:52.535 request:
00:39:52.535 {
00:39:52.535 "name": "nvme0",
00:39:52.535 "trtype": "tcp",
00:39:52.535 "traddr": "127.0.0.1",
00:39:52.535 "adrfam": "ipv4",
00:39:52.535 "trsvcid": "4420",
00:39:52.535 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:39:52.535 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:39:52.535 "prchk_reftag": false,
00:39:52.535 "prchk_guard": false,
00:39:52.535 "hdgst": false,
00:39:52.535 "ddgst": false,
00:39:52.535 "psk": ":spdk-test:key1",
00:39:52.535 "allow_unrecognized_csi": false,
00:39:52.535 "method": "bdev_nvme_attach_controller",
00:39:52.535 "req_id": 1
00:39:52.535 }
00:39:52.535 Got JSON-RPC error response
00:39:52.535 response:
00:39:52.535 {
00:39:52.535 "code": -5,
00:39:52.535 "message": "Input/output error"
00:39:52.535 }
00:39:52.535 14:11:34 keyring_linux -- common/autotest_common.sh@655 -- # es=1
00:39:52.535 14:11:34 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:39:52.535 14:11:34 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:39:52.535 14:11:34 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:39:52.535 14:11:34 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:39:52.535 14:11:34 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:39:52.535 14:11:34 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:39:52.535 14:11:34 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:39:52.535 14:11:34 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:39:52.535 14:11:34 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:39:52.535 14:11:34 keyring_linux -- keyring/linux.sh@33 -- # sn=359685629
00:39:52.535 14:11:34 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 359685629
00:39:52.535 1 links removed
00:39:52.535 14:11:34 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:39:52.535 14:11:34 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:39:52.535 14:11:34 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:39:52.535 14:11:34 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:39:52.535 14:11:34 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:39:52.535 14:11:34 keyring_linux -- keyring/linux.sh@33 -- # sn=340043505
00:39:52.535 14:11:34 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 340043505
00:39:52.535 1 links removed
00:39:52.535 14:11:34 keyring_linux -- keyring/linux.sh@41 -- # killprocess 950118
00:39:52.535 14:11:34 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 950118 ']'
00:39:52.535 14:11:34 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 950118
00:39:52.535 14:11:34 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:39:52.535 14:11:34 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:39:52.535 14:11:34 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 950118
00:39:52.535 14:11:35 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:39:52.535 14:11:35 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:39:52.535 14:11:35 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 950118'
00:39:52.535 killing process with pid 950118
00:39:52.535 14:11:35 keyring_linux -- common/autotest_common.sh@973 -- # kill 950118
00:39:52.535 Received shutdown signal, test time was about 1.000000 seconds
00:39:52.535
00:39:52.535 Latency(us)
00:39:52.535 [2024-12-05T13:11:35.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:52.535 [2024-12-05T13:11:35.122Z] ===================================================================================================================
00:39:52.535 [2024-12-05T13:11:35.122Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:39:52.535 14:11:35 keyring_linux -- common/autotest_common.sh@978 -- # wait 950118
00:39:52.792 14:11:35 keyring_linux -- keyring/linux.sh@42 -- # killprocess 950104
00:39:52.792 14:11:35 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 950104 ']'
00:39:52.792 14:11:35 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 950104
00:39:52.792 14:11:35 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:39:52.792 14:11:35 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:39:52.792 14:11:35 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 950104
00:39:52.792 14:11:35 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:39:52.792 14:11:35 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:39:52.792 14:11:35 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 950104'
00:39:52.792 killing process with pid 950104
00:39:52.792 14:11:35 keyring_linux -- common/autotest_common.sh@973 -- # kill 950104
00:39:52.792 14:11:35 keyring_linux -- common/autotest_common.sh@978 -- # wait 950104
00:39:53.049
00:39:53.049 real 0m4.325s
00:39:53.049 user 0m8.126s
00:39:53.049 sys 0m1.448s
00:39:53.049 14:11:35 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable
00:39:53.049 14:11:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:39:53.049 ************************************
00:39:53.049 END TEST keyring_linux
00:39:53.049 ************************************
00:39:53.049 14:11:35 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:39:53.049 14:11:35 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:39:53.049 14:11:35 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:39:53.049 14:11:35 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:39:53.049 14:11:35 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:39:53.049 14:11:35 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:39:53.049 14:11:35 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:39:53.049 14:11:35 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:39:53.049 14:11:35 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:39:53.049 14:11:35 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:39:53.049 14:11:35 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:39:53.049 14:11:35 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:39:53.049 14:11:35 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:39:53.049 14:11:35 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:39:53.049 14:11:35 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:39:53.049 14:11:35 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:39:53.049 14:11:35 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:39:53.049 14:11:35 -- common/autotest_common.sh@726 -- # xtrace_disable
00:39:53.049 14:11:35 -- common/autotest_common.sh@10 -- # set +x
00:39:53.049 14:11:35 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:39:53.049 14:11:35 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:39:53.049 14:11:35 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:39:53.049 14:11:35 -- common/autotest_common.sh@10 -- # set +x
00:39:58.332 INFO: APP EXITING
00:39:58.332 INFO: killing all VMs
00:39:58.332 INFO: killing vhost app
00:39:58.332 INFO: EXIT DONE
00:40:00.865 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:40:00.865 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:40:00.865 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:40:00.865 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:40:00.865 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:40:00.865 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:40:00.865 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:40:00.865 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:40:00.865 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:40:00.865 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:40:00.865 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:40:00.865 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:40:00.865 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:40:00.865 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:40:01.124 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:40:01.124 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:40:01.124 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:40:04.413 Cleaning
00:40:04.413 Removing: /var/run/dpdk/spdk0/config
00:40:04.413 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:40:04.413 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:40:04.413 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:40:04.413 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:40:04.413 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:40:04.413 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:40:04.413 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:40:04.413 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:40:04.413 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:40:04.413 Removing: /var/run/dpdk/spdk0/hugepage_info
00:40:04.413 Removing: /var/run/dpdk/spdk1/config
00:40:04.413 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:40:04.413 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:40:04.413 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:40:04.413 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:40:04.413 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:40:04.413 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:40:04.413 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:40:04.413 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:40:04.413 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:40:04.413 Removing: /var/run/dpdk/spdk1/hugepage_info
00:40:04.413 Removing: /var/run/dpdk/spdk2/config
00:40:04.413 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:40:04.413 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:40:04.413 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:40:04.413 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:40:04.413 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:40:04.413 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:40:04.413 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:40:04.413 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:40:04.413 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:40:04.413 Removing: /var/run/dpdk/spdk2/hugepage_info
00:40:04.413 Removing: /var/run/dpdk/spdk3/config
00:40:04.413 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:40:04.413 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:40:04.413 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:40:04.413 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:40:04.413 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:40:04.413 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:40:04.413 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:40:04.413 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:40:04.413 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:40:04.413 Removing: /var/run/dpdk/spdk3/hugepage_info
00:40:04.413 Removing: /var/run/dpdk/spdk4/config
00:40:04.413 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:40:04.413 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:40:04.413 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:40:04.413 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:40:04.413 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:40:04.413 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:40:04.413 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:40:04.413 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:40:04.413 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:40:04.413 Removing: /var/run/dpdk/spdk4/hugepage_info
00:40:04.413 Removing: /dev/shm/bdev_svc_trace.1
00:40:04.413 Removing: /dev/shm/nvmf_trace.0
00:40:04.413 Removing: /dev/shm/spdk_tgt_trace.pid470778
00:40:04.413 Removing: /var/run/dpdk/spdk0
00:40:04.413 Removing: /var/run/dpdk/spdk1
00:40:04.413 Removing: /var/run/dpdk/spdk2
00:40:04.413 Removing: /var/run/dpdk/spdk3
00:40:04.413 Removing: /var/run/dpdk/spdk4
00:40:04.413 Removing: /var/run/dpdk/spdk_pid468408
00:40:04.413 Removing: /var/run/dpdk/spdk_pid469473
00:40:04.413 Removing: /var/run/dpdk/spdk_pid470778
00:40:04.413 Removing: /var/run/dpdk/spdk_pid471326
00:40:04.413 Removing: /var/run/dpdk/spdk_pid472266
00:40:04.413 Removing: /var/run/dpdk/spdk_pid472387
00:40:04.413 Removing: /var/run/dpdk/spdk_pid473360
00:40:04.413 Removing: /var/run/dpdk/spdk_pid473505
00:40:04.413 Removing: /var/run/dpdk/spdk_pid473731
00:40:04.413 Removing: /var/run/dpdk/spdk_pid475468
00:40:04.413 Removing: /var/run/dpdk/spdk_pid476745
00:40:04.413 Removing: /var/run/dpdk/spdk_pid477030
00:40:04.413 Removing: /var/run/dpdk/spdk_pid477317
00:40:04.413 Removing: /var/run/dpdk/spdk_pid477638
00:40:04.413 Removing: /var/run/dpdk/spdk_pid477928
00:40:04.413 Removing: /var/run/dpdk/spdk_pid478180
00:40:04.413 Removing: /var/run/dpdk/spdk_pid478431
00:40:04.414 Removing: /var/run/dpdk/spdk_pid478715
00:40:04.414 Removing: /var/run/dpdk/spdk_pid479457
00:40:04.414 Removing: /var/run/dpdk/spdk_pid482452
00:40:04.414 Removing: /var/run/dpdk/spdk_pid482710
00:40:04.414 Removing: /var/run/dpdk/spdk_pid482964
00:40:04.414 Removing: /var/run/dpdk/spdk_pid482973
00:40:04.414 Removing: /var/run/dpdk/spdk_pid483468
00:40:04.414 Removing: /var/run/dpdk/spdk_pid483473
00:40:04.414 Removing: /var/run/dpdk/spdk_pid483961
00:40:04.414 Removing: /var/run/dpdk/spdk_pid483970
00:40:04.414 Removing: /var/run/dpdk/spdk_pid484231
00:40:04.414 Removing: /var/run/dpdk/spdk_pid484252
00:40:04.414 Removing: /var/run/dpdk/spdk_pid484509
00:40:04.414 Removing: /var/run/dpdk/spdk_pid484580
00:40:04.414 Removing: /var/run/dpdk/spdk_pid485082
00:40:04.414 Removing: /var/run/dpdk/spdk_pid485329
00:40:04.414 Removing: /var/run/dpdk/spdk_pid485624
00:40:04.414 Removing: /var/run/dpdk/spdk_pid489337
00:40:04.414 Removing: /var/run/dpdk/spdk_pid493800
00:40:04.414 Removing: /var/run/dpdk/spdk_pid504362
00:40:04.414 Removing: /var/run/dpdk/spdk_pid505054
00:40:04.414 Removing: /var/run/dpdk/spdk_pid509326
00:40:04.414 Removing: /var/run/dpdk/spdk_pid509580
00:40:04.414 Removing: /var/run/dpdk/spdk_pid513845
00:40:04.414 Removing: /var/run/dpdk/spdk_pid519741
00:40:04.414 Removing: /var/run/dpdk/spdk_pid522417
00:40:04.414 Removing: /var/run/dpdk/spdk_pid532608
00:40:04.414 Removing: /var/run/dpdk/spdk_pid541708
00:40:04.414 Removing: /var/run/dpdk/spdk_pid543390
00:40:04.414 Removing: /var/run/dpdk/spdk_pid544373
00:40:04.414 Removing: /var/run/dpdk/spdk_pid561922
00:40:04.414 Removing: /var/run/dpdk/spdk_pid565953
00:40:04.414 Removing: /var/run/dpdk/spdk_pid612153
00:40:04.414 Removing: /var/run/dpdk/spdk_pid617543
00:40:04.414 Removing: /var/run/dpdk/spdk_pid623373
00:40:04.414 Removing: /var/run/dpdk/spdk_pid630044
00:40:04.414 Removing: /var/run/dpdk/spdk_pid630060
00:40:04.414 Removing: /var/run/dpdk/spdk_pid630846
00:40:04.414 Removing: /var/run/dpdk/spdk_pid631674
00:40:04.414 Removing: /var/run/dpdk/spdk_pid632587
00:40:04.414 Removing: /var/run/dpdk/spdk_pid633082
00:40:04.414 Removing: /var/run/dpdk/spdk_pid633277
00:40:04.414 Removing: /var/run/dpdk/spdk_pid633507
00:40:04.414 Removing: /var/run/dpdk/spdk_pid633523
00:40:04.414 Removing: /var/run/dpdk/spdk_pid633525
00:40:04.414 Removing: /var/run/dpdk/spdk_pid634436
00:40:04.414 Removing: /var/run/dpdk/spdk_pid635351
00:40:04.414 Removing: /var/run/dpdk/spdk_pid636271
00:40:04.414 Removing: /var/run/dpdk/spdk_pid636737
00:40:04.414 Removing: /var/run/dpdk/spdk_pid636739
00:40:04.414 Removing: /var/run/dpdk/spdk_pid637067
00:40:04.414 Removing: /var/run/dpdk/spdk_pid638214
00:40:04.414 Removing: /var/run/dpdk/spdk_pid639194
00:40:04.414 Removing: /var/run/dpdk/spdk_pid647803
00:40:04.414 Removing: /var/run/dpdk/spdk_pid676230
00:40:04.414 Removing: /var/run/dpdk/spdk_pid681043
00:40:04.414 Removing: /var/run/dpdk/spdk_pid683054
00:40:04.414 Removing: /var/run/dpdk/spdk_pid684890
00:40:04.414 Removing: /var/run/dpdk/spdk_pid684907
00:40:04.414 Removing: /var/run/dpdk/spdk_pid685143
00:40:04.414 Removing: /var/run/dpdk/spdk_pid685284
00:40:04.414 Removing: /var/run/dpdk/spdk_pid685719
00:40:04.414 Removing: /var/run/dpdk/spdk_pid687500
00:40:04.414 Removing: /var/run/dpdk/spdk_pid688340
00:40:04.414 Removing: /var/run/dpdk/spdk_pid688758
00:40:04.414 Removing: /var/run/dpdk/spdk_pid691072
00:40:04.414 Removing: /var/run/dpdk/spdk_pid691438
00:40:04.414 Removing: /var/run/dpdk/spdk_pid692074
00:40:04.414 Removing: /var/run/dpdk/spdk_pid696120
00:40:04.414 Removing: /var/run/dpdk/spdk_pid701513
00:40:04.414 Removing: /var/run/dpdk/spdk_pid701514
00:40:04.414 Removing: /var/run/dpdk/spdk_pid701516
00:40:04.414 Removing: /var/run/dpdk/spdk_pid705308
00:40:04.414 Removing: /var/run/dpdk/spdk_pid713859
00:40:04.414 Removing: /var/run/dpdk/spdk_pid717666
00:40:04.414 Removing: /var/run/dpdk/spdk_pid723895
00:40:04.414 Removing: /var/run/dpdk/spdk_pid725378
00:40:04.414 Removing: /var/run/dpdk/spdk_pid727012
00:40:04.414 Removing: /var/run/dpdk/spdk_pid728343
00:40:04.414 Removing: /var/run/dpdk/spdk_pid733038
00:40:04.414 Removing: /var/run/dpdk/spdk_pid737372
00:40:04.414 Removing: /var/run/dpdk/spdk_pid741317
00:40:04.414 Removing: /var/run/dpdk/spdk_pid748768
00:40:04.414 Removing: /var/run/dpdk/spdk_pid748774
00:40:04.414 Removing: /var/run/dpdk/spdk_pid753486
00:40:04.414 Removing: /var/run/dpdk/spdk_pid753720
00:40:04.414 Removing: /var/run/dpdk/spdk_pid753950
00:40:04.414 Removing: /var/run/dpdk/spdk_pid754284
00:40:04.414 Removing: /var/run/dpdk/spdk_pid754410
00:40:04.414 Removing: /var/run/dpdk/spdk_pid759081
00:40:04.414 Removing: /var/run/dpdk/spdk_pid759539
00:40:04.674 Removing: /var/run/dpdk/spdk_pid764019
00:40:04.674 Removing: /var/run/dpdk/spdk_pid766591
00:40:04.674 Removing: /var/run/dpdk/spdk_pid771981
00:40:04.674 Removing: /var/run/dpdk/spdk_pid777632
00:40:04.674 Removing: /var/run/dpdk/spdk_pid786612
00:40:04.674 Removing: /var/run/dpdk/spdk_pid793600
00:40:04.674 Removing: /var/run/dpdk/spdk_pid793602
00:40:04.674 Removing: /var/run/dpdk/spdk_pid812387
00:40:04.674 Removing: /var/run/dpdk/spdk_pid812861
00:40:04.674 Removing: /var/run/dpdk/spdk_pid813481
00:40:04.674 Removing: /var/run/dpdk/spdk_pid814027
00:40:04.674 Removing: /var/run/dpdk/spdk_pid814764
00:40:04.674 Removing: /var/run/dpdk/spdk_pid815241
00:40:04.674 Removing: /var/run/dpdk/spdk_pid815896
00:40:04.674 Removing: /var/run/dpdk/spdk_pid816407
00:40:04.674 Removing: /var/run/dpdk/spdk_pid820510
00:40:04.674 Removing: /var/run/dpdk/spdk_pid820797
00:40:04.674 Removing: /var/run/dpdk/spdk_pid827322
00:40:04.674 Removing: /var/run/dpdk/spdk_pid827526
00:40:04.674 Removing: /var/run/dpdk/spdk_pid832889
00:40:04.674 Removing: /var/run/dpdk/spdk_pid837019
00:40:04.674 Removing: /var/run/dpdk/spdk_pid846986
00:40:04.674 Removing: /var/run/dpdk/spdk_pid847461
00:40:04.674 Removing: /var/run/dpdk/spdk_pid851712
00:40:04.674 Removing: /var/run/dpdk/spdk_pid851961
00:40:04.674 Removing: /var/run/dpdk/spdk_pid856201
00:40:04.674 Removing: /var/run/dpdk/spdk_pid862014
00:40:04.674 Removing: /var/run/dpdk/spdk_pid864468
00:40:04.674 Removing: /var/run/dpdk/spdk_pid875089
00:40:04.674 Removing: /var/run/dpdk/spdk_pid883758
00:40:04.674 Removing: /var/run/dpdk/spdk_pid885371
00:40:04.674 Removing: /var/run/dpdk/spdk_pid886288
00:40:04.674 Removing: /var/run/dpdk/spdk_pid902413
00:40:04.674 Removing: /var/run/dpdk/spdk_pid906225
00:40:04.674 Removing: /var/run/dpdk/spdk_pid908912
00:40:04.674 Removing: /var/run/dpdk/spdk_pid917477
00:40:04.674 Removing: /var/run/dpdk/spdk_pid917619
00:40:04.674 Removing: /var/run/dpdk/spdk_pid922667
00:40:04.674 Removing: /var/run/dpdk/spdk_pid924627
00:40:04.674 Removing: /var/run/dpdk/spdk_pid926593
00:40:04.674 Removing: /var/run/dpdk/spdk_pid927641
00:40:04.674 Removing: /var/run/dpdk/spdk_pid929622
00:40:04.674 Removing: /var/run/dpdk/spdk_pid930793
00:40:04.674 Removing: /var/run/dpdk/spdk_pid939649
00:40:04.674 Removing: /var/run/dpdk/spdk_pid940129
00:40:04.674 Removing: /var/run/dpdk/spdk_pid940774
00:40:04.674 Removing: /var/run/dpdk/spdk_pid943055
00:40:04.674 Removing: /var/run/dpdk/spdk_pid943518
00:40:04.674 Removing: /var/run/dpdk/spdk_pid944019
00:40:04.674 Removing: /var/run/dpdk/spdk_pid948029
00:40:04.674 Removing: /var/run/dpdk/spdk_pid948036
00:40:04.674 Removing: /var/run/dpdk/spdk_pid949555
00:40:04.674 Removing: /var/run/dpdk/spdk_pid950104
00:40:04.674 Removing: /var/run/dpdk/spdk_pid950118
00:40:04.674 Clean
00:40:04.933 14:11:47 -- common/autotest_common.sh@1453 -- # return 0
00:40:04.933 14:11:47 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:40:04.933 14:11:47 -- common/autotest_common.sh@732 -- # xtrace_disable
00:40:04.933 14:11:47 -- common/autotest_common.sh@10 -- # set +x
00:40:04.933 14:11:47 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:40:04.933 14:11:47 -- common/autotest_common.sh@732 -- # xtrace_disable
00:40:04.933 14:11:47 -- common/autotest_common.sh@10 -- # set +x
00:40:04.933 14:11:47 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:40:04.933 14:11:47 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:40:04.933 14:11:47 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:40:04.933 14:11:47 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:40:04.933 14:11:47 -- spdk/autotest.sh@398 -- # hostname
00:40:04.933 14:11:47 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:40:05.192 geninfo: WARNING: invalid characters removed from testname!
00:40:27.135 14:12:08 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:29.041 14:12:11 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:30.421 14:12:12 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:32.326 14:12:14 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:34.242 14:12:16 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:36.302 14:12:18 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:38.206 14:12:20 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:40:38.206 14:12:20 -- spdk/autorun.sh@1 -- $ timing_finish
00:40:38.206 14:12:20 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:40:38.206 14:12:20 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:40:38.206 14:12:20 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:40:38.206 14:12:20 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:40:38.206 + [[ -n 390922 ]]
00:40:38.206 + sudo kill 390922
00:40:38.215 [Pipeline] }
00:40:38.231 [Pipeline] // stage
00:40:38.238 [Pipeline] }
00:40:38.255 [Pipeline] // timeout
00:40:38.262 [Pipeline] }
00:40:38.275 [Pipeline] // catchError
00:40:38.281 [Pipeline] }
00:40:38.296 [Pipeline] // wrap
00:40:38.303 [Pipeline] }
00:40:38.314 [Pipeline] // catchError
00:40:38.323 [Pipeline] stage
00:40:38.324 [Pipeline] { (Epilogue)
00:40:38.332 [Pipeline] catchError
00:40:38.333 [Pipeline] {
00:40:38.340 [Pipeline] echo
00:40:38.341 Cleanup processes
00:40:38.345 [Pipeline] sh
00:40:38.625 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:40:38.625 961284 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:40:38.640 [Pipeline] sh
00:40:38.930 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:40:38.930 ++ grep -v 'sudo pgrep'
00:40:38.930 ++ awk '{print $1}'
00:40:38.930 + sudo kill -9
00:40:38.930 + true
00:40:38.942 [Pipeline] sh
00:40:39.225 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:40:51.453 [Pipeline] sh
00:40:51.739 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:40:51.739 Artifacts sizes are good
00:40:51.755 [Pipeline] archiveArtifacts
00:40:51.763 Archiving artifacts
00:40:51.894 [Pipeline] sh
00:40:52.180 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:40:52.196 [Pipeline] cleanWs
00:40:52.207 [WS-CLEANUP] Deleting project workspace...
00:40:52.207 [WS-CLEANUP] Deferred wipeout is used...
00:40:52.214 [WS-CLEANUP] done
00:40:52.216 [Pipeline] }
00:40:52.235 [Pipeline] // catchError
00:40:52.249 [Pipeline] sh
00:40:52.532 + logger -p user.info -t JENKINS-CI
00:40:52.541 [Pipeline] }
00:40:52.556 [Pipeline] // stage
00:40:52.561 [Pipeline] }
00:40:52.577 [Pipeline] // node
00:40:52.582 [Pipeline] End of Pipeline
00:40:52.620 Finished: SUCCESS